Effects of ETC moving to SHA3
0xBitcoin might experience some benefits from ETC's recent decision to go to SHA3. For one thing, if ETC developers turn their minds toward SHA3 GPU mining, it seems likely that they'll be very interested in the optimization work already done by Lt. Tofu and Azlehria on Cosmic and Nabiki. But they may also spot additional optimizations, which could most likely be ported back into our miners! For another thing, since ETC has a larger community than 0xBitcoin's, it's likely that they have people with very diverse skills. As they turn their minds toward SHA3 on FPGAs, I anticipate they will be choosing FPGA boards and writing software for them that probably will not be very difficult to alter to mine 0xBitcoin! Practically speaking, this may result in a new era of mining where a respectable hash rate can be achieved with much less electricity expense. Basically, if people with EE or hardware development backgrounds in the ETC community pin down some of the variables involved, like development board, it seems very likely that we will be able to get FPGAs mining 0xBitcoin quickly. We recently exchanged some nice words with Alex Tsankov. He revealed he is doing work that may allow merge mining ETC and other projects like 0xBitcoin. I don't know how good this will be, but it's hard for me to see how this could be a bad thing. It's really great to have nice words from Alex, because he seems like a very smart dude who is very open to collaboration. I have joined the ETC Discord (edit: link removed for just-Reddit-things reasons, you'll have no trouble finding it) and will be watching for opportunities to tell people there about our miners if they don't know, and watching for info about their developments on SHA3 miners.
Source: https://www.facebook.com/electroneum/posts/2030562537205714 Hi Everyone! ALL ELECTRONEUM NODE OWNERS MUST UPDATE THEIR SOFTWARE BY BLOCK 324500 (approx. 36 hours from now – this is an URGENT UPDATE – PLEASE SHARE THIS INFORMATION) We have an urgent software update below for anyone who runs a full Electroneum node. If you don’t know what a node is, don’t worry! You won’t need to do anything. We also have a VERY exciting update about an upcoming listing on a top 10 exchange. In this post: How will I mine Electroneum after this update? Instant Payment vendor API is open for BETA applicants. How can ETN change the world? Please note that nothing in this message refers to MOBILE MINING – we are referring to the underlying blockchain miners. Urgent Electroneum Node / RPC / Command Line Wallet Update ALL ELECTRONEUM NODE OWNERS MUST UPDATE THEIR SOFTWARE BY BLOCK 324500 (approx. 36 hours from now – this is an URGENT UPDATE – PLEASE SHARE THIS INFORMATION) https://github.com/electron…/electroneum/releases/…/v220.127.116.11 It’s only been a few short days since I made a video and said “our fork went well! We’re ready for 20m users!”. The fork was a great success from a technical standpoint. Unfortunately, we never got back the number of GPU miners needed to ensure our network runs smoothly and has stable block emission. A new phenomenon has emerged where a number of users are mining Electroneum in waves. They come on and then leave after a few hours in a coordinated manner to mine ETN in a completely selfish way. We can’t blame people for maximizing their profit, but we have not built up the amount of “hashing power” required to make this impossible and create the stability we need in the network. This has left us at risk. As such, we have to take urgent action to stabilise our network and protect the Electroneum community. 
Coinbene Listing Electroneum & our network stability We have formally agreed and signed contracts to be listed in July on the AWESOME, top 10, cryptocurrency exchange https://Coinbene.com & https://Coinbene.com.br Coinbene have 1.5m active users and are a GREAT fit for Electroneum. Their primary markets are Latin America and Asia – which fits perfectly with Electroneum’s customer base. They have seen enormous growth over the last few months and have been very positive about the Electroneum project. Whilst this is great news, we will need much more hashing power to ensure we have network stability for our listing on this exchange, so we’ve taken the decision that we can’t wait any longer for GPU miners to return to us and we must run an urgent software update to re-introduce ASIC mining to Electroneum. This is a very positive move for Electroneum. A great deal of Bitcoin’s trust and appeal comes from the enormous hashing power and distribution of miners on the network. Bitcoin & Litecoin have embraced ASICs and we feel that it is the right thing for Electroneum to do the same. ASICs are becoming more prevalent; they cost considerably less to run than a GPU rig and use a fraction of the electricity. We are going to encourage more ASIC ownership and take our hashing rate up to (and beyond) the enormous levels of hashing power that we had before the May fork. There is a further development. The first generation of a new class of hardware, the FPGA miner, is arriving during 2018, and it makes ANTI-ASIC capabilities a thing of the past, as FPGAs circumvent the slow delivery time of new ASICs by being re-programmable. If we are ready to embrace these rather than fight them, our network hashing power is increased further and our network stability and security are further enhanced. Because ASICs run cooler, quieter and use a fraction of GPU rig power, they are suitable for MORE people to run in their homes. 
If you are interested, a search for “Cryptonight ASIC miner” on Google or eBay will find the equipment needed to mine Electroneum. You will need to be reasonably technical to achieve this! Having a stable network is absolutely key to delivering mass adoption, to ensuring we have a great relationship with the great exchanges we’re already listed with, and to encouraging more of the larger exchanges to see Electroneum as a coin that they want on board. How will I mine Electroneum after this update? If you are a mobile miner – nothing changes. If you are a GPU or ASIC miner then you’ll need to connect to an Electroneum pool, but it is important to note that you will need to change your ALGORITHM. You MUST use the algorithm “Cryptonight” and NOT “Electroneum” or “CryptonightV7”. This will ensure your device works after the update. We will communicate this to all pools, but if you are a member of a mining pool – PLEASE LET THE ADMINS KNOW ABOUT THIS CRITICAL UPDATE. They must update their pool node by block 324500, which is only around 36 hours away. Instant Payment vendor API is open for BETA applicants Instant cryptocurrency payments via smartphone have always been a critical part of what Electroneum required to achieve mass-market adoption. It’s never been done, but 9 short months after our ICO we are excited to announce that we have opened the doors to vendors who would like to accept payment via Electroneum. The application is to be part of the BETA rollout of instant payment, but it will operate on the live blockchain with real ETN! If you run a business or know someone who does – why not recommend they apply to accept ETN. The press and marketing opportunities for the first, in any sector, to accept cryptocurrency are huge! Be part of the instant payment API BETA program by completing this form: https://docs.google.com/…/1FAIpQLSfKTwWT7W4ltmApZO…/viewform How can ETN change the world? 
Instant payment does far more than allow people to pay for their coffee with crypto instead of their VISA card. If you’d like to know more about Electroneum’s future, I suggest you read a fantastic article that describes its coming role in the world, by fellow director Chris Gorman OBE (Officer of the Order of the British Empire – awarded by the Queen!): https://www.linkedin.com/…/how-cryptocurrency-enable-financ… Electroneum has one of the largest of all cryptocurrency communities, and it is made up of passionate and amazing people. With your support and the world-changing things we have coming out over the next few weeks, we can use this update to make our blockchain foundation secure and lead the world in mobile cryptocurrency. I'm sure you agree that we've been through some challenging times, which our team have always dealt with and learned from. The strength and support of our community, many of our goals becoming a reality, and this blockchain update will together give us the perfect foundation to deliver the Electroneum vision that we all share. Thanks for taking the time to read this long message. Have a great day everyone, Richard Ells Founder, Electroneum.com
Hello, I’ve been trying to decide on an FPGA development board, and have only been able to find posts and Reddit threads from 4-5 years ago. So I wanted to start a new thread and ask about the best “mid-range” FPGA development board in 2018. (Price range $100-$300.) I started with this Quora answer about FPGA boards, from 2013. The Altera DE1 sounded good. Then I looked through the Terasic DE boards. Then I found this Reddit thread from 2014, asking about the DE1-SoC vs the Cyclone V GX Starter Kit: https://www.reddit.com/FPGA/comments/1xsk6w/cyclone_v_gx_starter_kit_vs_de1soc_board/ (I was also leaning towards the DE1-SoC.) Anyway, I thought I'd better ask here, because there are probably some new things to be aware of in 2018. I’m completely new to FPGAs and VHDL, but I have experience with electronics/microcontrollers/programming. My goal is to start with some basic soft-core processors. I want to get some C / Rust programs compiling and running on my own CPU designs. I also want to play around with different instruction sets, and maybe start experimenting with asynchronous circuits (e.g. clock-less CPUs). Also, I don’t know if this is possible, but I’d like to experiment with ternary computing, or work with analog signals instead of purely digital logic. EDIT: I just realized that you would call those FPAAs, i.e. “analog” instead of “gate”. Would be cool if there was a dev board that also had an FPAA, but no problem if not. EDIT 2: I also realized why "analog signals on an FPGA" doesn't make any sense, because of how LUTs work. They emulate boolean logic with a lookup table, and the table can only store 0s and 1s. So there's no way to emulate a transistor in an intermediate state. I'll just have to play around with some transistors on a breadboard. UPDATE: I've put together a table with some of the best options:
A very simple FPGA development board that plugs into a Raspberry Pi, so you have a "backup" hard-core CPU that can handle networking, etc. Supports a huge range of pmod accessories. You can write a program/circuit so that the Raspberry Pi CPU and the FPGA work together, similar to a SoC. The proprietary bitstream format has been fully reverse engineered and is supported by Project IceStorm, and there is an open-source toolchain that can compile your hardware design to a bitstream. Has everything you need to start experimenting with FPGAs.
Xilinx Zynq 7-Series SoC - ARM Cortex-A9 processor and Artix-7 FPGA. 125 IO pins. 1GB DDR2 RAM. Texas Instruments WiLink 8 wireless module for 802.11n Wi-Fi and Bluetooth 4.1. No LEDs or buttons, but easy to wire up your own on a breadboard. If you want to use a baseboard, you'll need a snickerdoodle black ($195) with the pins in the "down" orientation (e.g. for the "breakyBreaky" breakout board ($49) or the piSmasher SBC ($195)). The plain snickerdoodle comes only with pins in the "up" orientation and doesn't support any baseboards, but you can still plug jumpers into the pins and wire things up on a breadboard.
Has one of the latest Xilinx SoCs. 2 GB (512M x32) LPDDR4 memory. Wi-Fi / Bluetooth. Mini DisplayPort. 1x USB 3.0 Type Micro-B, 2x USB 3.0 Type A. Audio I/O. Four user-controllable LEDs. No buttons and few other LEDs, but easy to wire up your own on a breadboard.
Xilinx Zynq 7000 SoC (ARM Cortex-A9, 7-series FPGA.) 1 GB DDR3 RAM. A few switches, push buttons, and LEDs. USB and Ethernet. Audio in/out ports. HDMI source + sink with CEC. 8 Total Processor I/O, 40 Total FPGA I/O. Also a faster version for $299 (Zybo Z7-20).
Same as DE10-Standard, but not as many peripherals, buttons, LEDs, etc.
icoBoard ($100). (Buy it here.) The icoBoard plugs into a Raspberry Pi, so it's similar to having a SoC. The iCE40-HX8K chip comes with 7,680 LUTs (logic elements.) This means that after you learn the basics and create some simple circuits, you'll also have enough logic elements to run the VexRiscv soft-core CPU (the lightweight Murax SoC.) The icoBoard also supports a huge range of pluggable pmod accessories:
numato Mimas A7 ($149). An excellent development board with a Xilinx Artix 7 FPGA, so you can play with a bigger / faster FPGA and run a full RISC-V soft-core with all the options enabled, and a much higher clock speed. (The iCE40 FPGAs are a bit slow and small.)
I ordered an iCE40-HX8K Breakout Board to try out the IceStorm open-source tooling. (I would have ordered an icoBoard if I had found it earlier.) I also bought a numato Mimas A7 so that I could experiment with the Artix 7 FPGA and the Xilinx software (Vivado Design Suite.)
What can I do with an FPGA? / How many LUTs do I need?
VexRiscv is "A FPGA friendly 32 bit RISC-V CPU implementation." This is a RISC-V implementation written in SpinalHDL. VexRiscv has a lot of plugin and configuration options. The Murax SoC is a very light SoC that can run on an iCE40-HX8k (but probably not the 1k FPGA that only has 1,280 LUTs). The Briey SoC only runs on Xilinx or Altera FPGAs.
Technical Cryptonight Discussion: What about low-latency RAM (RLDRAM 3, QDR-IV, or HMC) + ASICs?
The Cryptonight algorithm is described as ASIC resistant, in particular because of one feature:
A megabyte of internal memory is almost unacceptable for the modern ASICs.
EDIT: Each instance of Cryptonight requires 2MB of RAM. Therefore, any Cryptonight multi-processor is required to have 2MB per instance. Since CPUs are incredibly well loaded with cache (i.e. 32MB L3 on Threadripper, 16MB L3 on Ryzen, and plenty of L2+L3 on Skylake servers), it seems unlikely that ASICs would be able to compete well vs CPUs. In fact, a large number of people seem to be incredibly confident in Cryptonight's ASIC resistance. And indeed, anyone who knows how standard DDR4 works knows that DDR4 is unacceptable for Cryptonight. GDDR5 similarly doesn't look like a very good technology for Cryptonight, since it focuses on high bandwidth instead of low latency. Which suggests only an ASIC with on-die RAM would be able to handle the 2MB that Cryptonight uses. Solid argument, but it seems to be missing a critical point of analysis to my eyes. What about "exotic" RAM, like RLDRAM3? Or even QDR-IV?
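As a quick sanity check of the cache argument, the number of 2MB scratchpads that fit in L3 follows from simple division (cache figures are the ones quoted above; Python used only as a calculator):

```python
# Each Cryptonight scratchpad needs 2 MB, so the number of instances a CPU
# can keep entirely in L3 cache is floor(L3 / 2 MB).
SCRATCHPAD_MB = 2

l3_cache_mb = {
    "Threadripper 1950X": 32,  # figure quoted above
    "Ryzen 7": 16,
}

for cpu, l3_mb in l3_cache_mb.items():
    instances = l3_mb // SCRATCHPAD_MB
    print(f"{cpu}: {instances} parallel Cryptonight instances fit in L3")
```

This reproduces the "Threadripper supports 16 instances" figure used later in the cost comparison.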
QDR-IV SRAM is absurdly expensive. However, it's a good example of "exotic RAM" that is available on the marketplace. I'm focusing on it because QDR-IV is really simple to describe. QDR-IV costs roughly $290 for 16Mbit x 18 bits. It is true static RAM. The 18 bits are 8 bits per byte + 1 parity bit, because QDR-IV is usually designed for high-speed routers. QDR-IV has none of the speed or latency issues of DDR4 RAM. There are no "banks", there are no "refreshes", there is no "obliterate the data as you load it into the sense amplifiers". There's no "auto-precharge" as you load the data from the sense amps back into the capacitors. Anything that could have caused latency issues is gone. QDR-IV is about as fast as you can get latency-wise. Every clock cycle, you specify an address, and QDR-IV will generate a response every clock cycle. In fact, QDR means "quad data rate", as the SRAM performs 2 reads and 2 writes per clock cycle. There is a slight amount of latency: 8 clock cycles for reads (7.5 nanoseconds), and 5 clock cycles for writes (4.6 nanoseconds). For those keeping track at home: AMD Zen's L3 cache has a latency of 40 clocks, aka 10 nanoseconds at 4GHz. Basically, QDR-IV BEATS the L3 latency of modern CPUs. And we haven't even begun to talk software or ASIC optimizations yet.
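The latency comparison can be sanity-checked with a couple of lines of arithmetic (the ~1066 MHz QDR-IV clock is an assumption inferred from the quoted "8 cycles = 7.5 ns"; the 4 GHz Zen clock is the figure used above):

```python
# Convert quoted cycle counts into wall-clock latency and compare.
def latency_ns(cycles, clock_mhz):
    """Latency in nanoseconds given a cycle count and a clock in MHz."""
    return cycles * 1000.0 / clock_mhz

qdr4_read = latency_ns(8, 1066.67)    # ~7.5 ns read latency
qdr4_write = latency_ns(5, 1066.67)   # ~4.7 ns write latency
zen_l3 = latency_ns(40, 4000.0)       # 10 ns, 40 clocks at 4 GHz
print(f"QDR-IV read {qdr4_read:.1f} ns, write {qdr4_write:.1f} ns, "
      f"Zen L3 {zen_l3:.1f} ns")
```

The read latency comes out below Zen's L3 latency, which is the whole point of the "QDR-IV beats L3" claim.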
CPU inefficiencies for Cryptonight
Now, if that weren't bad enough... CPUs have a few problems with the Cryptonight algorithm.
AMD Zen and Intel Skylake CPUs transfer from L3 -> L2 -> L1 cache. Each of these transfers is in 64-byte chunks. Cryptonight only uses 16 of those bytes. This means that 75% of L3 cache bandwidth is wasted on 48 bytes that will never be used per inner loop of Cryptonight. An ASIC would transfer only 16 bytes at a time, instantly increasing the RAM's effective speed 4-fold.
AES-NI instructions on Ryzen / Threadripper can only be done one-per-core. This means a 16-core Threadripper can at most perform 16 AES encryptions per clock tick. An ASIC can perform as many as you'd like, up to the speed of the RAM.
CPUs waste a ton of energy: there's L1 and L2 caches which do NOTHING in Cryptonight. There are floating-point units, memory controllers, and more. An ASIC which strips things out to only the bare necessities (basically: AES for Cryptonight core) would be way more power efficient, even at ancient 65nm or 90nm designs.
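The bandwidth figure in the first point above is trivial arithmetic, shown here explicitly (a sketch; Python just as a calculator):

```python
# A cache line is 64 bytes but Cryptonight's inner loop touches only 16 of
# them, so only 16/64 of the L3 bandwidth does useful work.
CACHE_LINE_BYTES = 64
USED_BYTES = 16

useful_fraction = USED_BYTES / CACHE_LINE_BYTES   # 0.25
wasted_fraction = 1 - useful_fraction             # 0.75
asic_gain = CACHE_LINE_BYTES // USED_BYTES        # 4x with 16-byte transfers
print(f"wasted: {wasted_fraction:.0%}, ASIC bandwidth gain: {asic_gain}x")
```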
QDR-IV and RLDRAM3 still have latency involved. Assuming 8 clocks of read latency, the naive access pattern (one instance at a time) would be:

Cryptonight #1 Read
(RAM idle while the read is in flight)
Cryptonight #1 Write
(RAM idle again)
Cryptonight #1 Read #2
...
This isn't very efficient: the RAM sits around waiting. Even with "latency reduced" RAM, you can see that the RAM still isn't doing very much. In fact, this is why people thought Cryptonight was safe against ASICs. But what if we instead ran four instances in parallel? That way, there is always data flowing.
Cryptonight #1 Read
Cryptonight #2 Read
Cryptonight #3 Read
Cryptonight #4 Read
Cryptonight #1 Write
Cryptonight #2 Write
Cryptonight #3 Write
Cryptonight #4 Write
Cryptonight #1 Read #2
Cryptonight #2 Read #2
Cryptonight #3 Read #2
Cryptonight #4 Read #2
Cryptonight #1 Write #2
Cryptonight #2 Write #2
Cryptonight #3 Write #2
Cryptonight #4 Write #2
Notice: we're doing 4x the Cryptonight in the same amount of time. Now imagine if the stalls were COMPLETELY gone. DDR4 CANNOT do this. And that's why most people thought ASICs were impossible for Cryptonight. Unfortunately, RLDRAM3 and QDR-IV can accomplish this kind of pipelining. In fact, that's what they were designed for.
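The interleaving above can be sketched as a toy throughput model (the latency and 4-instance figures come from the text; this is a deliberate simplification, not a memory-controller simulation):

```python
# Toy model of latency hiding.  Assume a read latency of 8 cycles and that
# the RAM can accept one new operation per cycle; a single instance must
# wait out the full latency between its own operations, but independent
# instances can overlap with each other.
READ_LATENCY = 8

def ops_completed(n_instances, cycles):
    if n_instances >= READ_LATENCY:
        return cycles  # pipeline is full: one op retires every cycle
    return cycles * n_instances // READ_LATENCY

naive = ops_completed(1, 1000)      # single instance: RAM mostly idle
pipelined = ops_completed(4, 1000)  # four interleaved instances
print(naive, pipelined)
```

With four instances the model completes 4x the operations in the same number of cycles, and the pipeline saturates once the instance count reaches the latency.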
As good as QDR-IV RAM is, it's way too expensive. RLDRAM3 is almost as fast, but is way more complicated to use and describe. Due to the lower cost of RLDRAM3, however, I'd assume any ASIC for CryptoNight would use RLDRAM3 instead of the simpler QDR-IV. RLDRAM3 32Mbit x36 bits costs $180 at quantity 1, and would support up to 64 parallel Cryptonight instances (in contrast, an $800 AMD 1950x Threadripper supports 16 at best). Such a design would basically operate at the maximum speed of RLDRAM3. In the case of an x36-bit bus and 2133 MT/s, we're talking about 2133 / (burst length 4 x 4 reads/writes x 524288 inner-loop iterations) == 254 full Cryptonight hashes per second. 254 hashes per second sounds low, and it is. But we're talking about literally a two-chip design here: 1 chip for RAM, 1 chip for the ASIC/AES stuff. Such a design would consume no more than 5 watts. If you were to replicate the ~5W design 60 times, you'd get 15,240 hash/second at 300 watts.
Depending on cost calculations, going cheaper and "making more" might be a better idea. RLDRAM2 is widely available at only $32 per chip at 800 MT/s. Such a design would theoretically support 800 / (4 x 4 x 524288) == 95 Cryptonight hashes per second. The scary part: the RLDRAM2 chip there only uses 1W of power. Together, you get 5 watts again as a reasonable power estimate. x60 would be 5,700 hashes/second at 300 watts. Here's Micron's whitepaper on RLDRAM2: https://www.micron.com/~/media/documents/products/technical-note/dram/tn4902.pdf . RLDRAM3 is the same but denser, faster, and more power efficient.
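The hash-rate napkin math for both RAM generations can be reproduced directly (all figures are the ones quoted above):

```python
# Transfers/s divided by the memory operations one full Cryptonight hash
# needs: burst length 4 x 4 reads/writes x 524288 inner-loop iterations.
BURST_LEN = 4
RW_PER_ITER = 4
INNER_LOOP = 524288

ops_per_hash = BURST_LEN * RW_PER_ITER * INNER_LOOP  # 8,388,608

def hashes_per_second(mt_per_s):
    """Hash rate given a memory speed in megatransfers per second."""
    return mt_per_s * 1_000_000 / ops_per_hash

rldram3 = hashes_per_second(2133)  # ~254 H/s per chip pair
rldram2 = hashes_per_second(800)   # ~95 H/s per chip pair
print(f"RLDRAM3: {rldram3:.0f} H/s, RLDRAM2: {rldram2:.0f} H/s")
print(f"60 boards: {60 * rldram3:.0f} H/s and {60 * rldram2:.0f} H/s")
```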
Hybrid Memory Cube
Hybrid Memory Cube (HMC) is "stacked RAM" designed for low latency. As far as I can tell, HMC allows an insane amount of parallelism and pipelining. It'd be the future of an ASIC Cryptonight design. The existence of HMC is more about "Generation 2" or later: in effect, it demonstrates that future designs can be lower power and higher speed.
The overall board design would be the ASIC: a simple pipelined AES ASIC that talks with RLDRAM3 ($180) or RLDRAM2 ($30). It's hard for me to estimate an ASIC's cost without the right tools or design. But a multi-project wafer service like MOSIS offers "cheap" access to 14nm and 22nm nodes. Rumor is that this is roughly $100k per run for ~40 dies, suitable for research and development. Mass production would require further investment, but mass production at the ~65nm node is rumored to be in the single-digit millions of dollars, or maybe even just 6 figures. So realistically speaking: it'd take a ~$10 million investment plus a talented engineer (or team of engineers) familiar with RLDRAM3, PCIe 3.0, ASIC design, AES, and Cryptonight to build an ASIC.
Current CPUs waste 75% of L3 bandwidth because they transfer 64-bytes per cache-line, but only use 16-bytes per inner-loop of CryptoNight.
Low-latency RAM exists for only $200 for ~128MB (aka: 64-parallel instances of 2MB Cryptonight). Such RAM has an estimated speed of 254 Hash/second (RLDRAM 3) or 95 Hash/second (Cheaper and older RLDRAM 2)
ASICs are therefore not going to be capital friendly: between the higher costs, the ASIC investment, and the literally millions of dollars needed for mass production, this would be a project that costs a lot more than a CPU per-unit per hash/sec.
HOWEVER, a Cryptonight ASIC seems possible. Furthermore, such a design would be grossly more power-efficient than any CPU. Though the capital investment is high, the rewards of mass-production and scalability are also high. Data-centers are power-limited, so any Cryptonight ASIC would be orders of magnitude lower-power than a CPU / GPU.
EDIT: Greater discussion throughout today has led me to napkin-math an FPGA + RLDRAM3 option. I estimated roughly ~$5,000 (+/- 30%; it's a very crude estimate) for a machine that performs ~3,500 hashes/second, on an unknown number of watts (maybe 75 watts?): $2,000 FPGA, $2,400 RLDRAM3, $600 on PCBs, misc chips, assembly, etc. A more serious effort may use Hybrid Memory Cube to achieve much higher FPGA-based hashrates. My current guess is that this is an overestimate of the cost, so -30% if you can get some bulk discounts, optimize the hypothetical design, and manage to accomplish it on cheaper hardware.
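For what it's worth, the bill-of-materials guess above works out as follows (all figures are the rough estimates from the text, including the +/-30% band):

```python
# The FPGA + RLDRAM3 bill-of-materials guess, with its stated uncertainty.
bom_usd = {"FPGA": 2000, "RLDRAM3": 2400, "PCBs/misc/assembly": 600}
total = sum(bom_usd.values())           # ~$5000 as estimated above
low, high = total * 0.7, total * 1.3    # +/-30% band

hashrate = 3500  # H/s, the rough estimate above
print(f"estimate: ${total} (range ${low:.0f}-${high:.0f}) for {hashrate} H/s")
print(f"cost per H/s: ${total / hashrate:.2f}")
```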
Cryptocurrency, like any other technological development, has given birth to many side industries and trends: ICOs, white-paper writing, mining, and so on. Just as cryptocurrency itself rises, falls and adapts to real-life conditions, so do its side industries and trends. Today we are going to focus on mining: how it has risen, fallen and adapted through the journey of cryptocurrency to date. Without going into details, crypto mining is the process by which new blocks are validated and added to the blockchain. It first went mainstream in January 2009 when the mysterious Satoshi Nakamoto launched Bitcoin, whose white paper proposed the first mining consensus mechanism, proof of work (PoW). The PoW consensus mechanism requires that one spend a certain amount of computational power to solve a cryptographic puzzle (finding a nonce) in order to have the right to pack/verify the next block on the blockchain. In this mechanism, the more computational power one possesses, the more rights one has over the packing of the next block. The quest for faster hardware has seen significant changes in the types of hardware dominating the PoW mining community. Back in 2009 when Bitcoin first started, a normal PC and its processing power worked just fine. In fact, a PC with an Intel i7 processor could mine up to 50 BTC per day, but back then that was worth almost nothing, since BTC traded at only a few cents. When the difficulty of the network became significantly high, simple computer processing units could no longer compete, and so miners settled for something more powerful: high-end graphics processors (GPUs). This is when the era of rigs began, in 2010. People would combine GPUs together in mining rigs on a motherboard, usually six per rig, and some miners operated farms containing many of these rigs. 
Of course, with greater power came greater network difficulty, and so the search for faster hardware led to the implementation of Field Programmable Gate Arrays (FPGAs) in June 2012. A further search for faster, less power-hungry and cheaper hardware led us to where we are today: in 2013, Application Specific Integrated Circuit (ASIC) miners were introduced. One ASIC miner of that era processed about 1500 H/s, roughly 100 times the processing power of a CPU or GPU. But all these speed and efficiency achievements brought about another problem, one which touches the core of cryptocurrency itself. The idea of decentralization was gradually fading away, as the wealthy and big companies were the ones who could afford to buy and build the miners, thereby centralizing mining around the rich. There were calls for ASIC-resistant consensus mechanisms. A movement for ASIC-resistant PoW algorithms began; the idea is to make ASIC mining impossible, or at least to ensure that using an ASIC doesn't give a miner any additional advantage over using a CPU. In 2013, CryptoNight, the ASIC-resistant PoW algorithm made famous by the privacy coin Monero, was proposed; at least, that is how it was intended to work. But things have proven much more difficult in practice than anticipated, as ASIC producers keep overcoming every barrier put in place by the PoW designers, at a rate faster than it takes to build those barriers. Monero, for example, has had to hard fork every now and then in order to keep CryptoNight ASIC resistant, a trick which is still not working, as one of its developers reported: “We [also] saw that this was very unsustainable. … It takes a lot to keep [hard forking] again and again for one. For two, it may decentralize mining but it centralizes in another area. 
It centralizes on the developers, because now there’s a lot of trust in developers to keep hard forking.” Another ASIC-resistant PoW algorithm is RandomX, and there are many others, but one could quickly imagine that the barriers in these ASIC-resistant algorithms would eventually be broken by ASIC makers too, and so a total shift from PoW to other consensus mechanisms that are ASIC resistant at their core was proposed, some of which are in use today. Enter the Proof of Stake (PoS) consensus mechanism. PoS was first introduced in 2013 by the PeerCoin team. Here, a validator’s right to mine is proportionate to his/her economic stake in the network; simply put, the more coins you hold, the more mining rights you get. Apart from PeerCoin, NEO and LISK also use PoS, and soon to follow is Ethereum. There are different variations of PoS, including but not limited to delegated proof of stake (DPoS) and masternode proof of stake (MPoS), each of which seeks to improve on some aspect of PoS. This is a very good ASIC-resistant consensus mechanism, but it still doesn’t solve the centralization problem, as the rich always have the power to buy more coins and gain more mining rights, plus it is also expensive to start. And then we have many other proposals to combat this, among which are Proof of Weight and Proof of Capacity (PoC). We take more interest in PoC: as of now, it is the latest and gives the best solution to our mining challenges. Proof of Capacity was first described in 2013 in the Proofs of Space paper by Dziembowski, Faust, Kolmogorov and Pietrzak, and it is now being used in Burst. The main factor that separates all the mining mechanisms is the resource used. These resources, which miners spend in order to gain mining rights, are a way of ensuring that one has expended a non-trivial amount of effort in making a statement. The resource being spent in PoC is disk space. 
This is less expensive, since many people already have some unused disk space lying around, and space is a cheap resource in tech. It has no discrimination by geography… it really solves many of the centralization problems present in almost all other consensus mechanisms. If the future is now, then one could say the future of crypto mining is PoC.
I would like to warmly welcome everyone to Waltonchain. This is an updated, extended community-written post, and I will try to update it regularly over time.
Please respect our rules (see sidebar) and feel free to comment, contribute and ask questions. Don’t forget to subscribe to the subreddit for any news on Waltonchain!
What is Waltonchain?
The Waltonchain Foundation is building a cross-industry, cross-data sharing platform by integrating blockchain with the Internet of Things through self-developed RFID chips with intellectual property rights. The in-house developed Waltonchain RFID chips integrate a proprietary, genuine random number generator, asymmetric encryption logic and a hardware signature circuit, all of which are patent-protected. The combination of self-developed RFID chips and the Waltonchain blockchain will ultimately achieve the interconnection of all things and create a genuine, credible, traceable business model with fully shared data and transparent information. Waltonchain will unfold a new era of the Value Internet of Things (VIoT).
The Waltonchain team has formulated a 4-phase development plan, starting from infrastructure platform establishment to gradually incorporating retail, logistics and product manufacturing, and to finally achieving the full coverage of the business ecosystem.
As for phase 1.0 of the project, the team has developed a clothing-system integration scheme based on RFID. The application scenarios at phase 1.0 will establish a golden demonstration template. At phase 2.0, our RFID beacon chip will be mass-produced and can be used in clothing, B2C retail and logistics. At phase 3.0, manufacturers will achieve traceable customization of intelligent packaging. At phase 4.0, with the upgrading and iteration of asset-information collection hardware and improvement of the blockchain data structure, all assets will be able to be registered on Waltonchain.
Do Sanghyuk (都相爀) – Initiator in Korea Korean, Vice Chairman of the China - Korea Cultural Exchange Development Committee, Director of the Korea Standard Products Association, Chairman of Seongnam Branch of the Korea Small and Medium Enterprises Committee, Chairman of Korea NC Technology Co., Ltd., Senior Reporter of IT TODAY News, Senior Reporter of NEWS PAPER Economic Department, Director of ET NEWS.
Xu Fangcheng (许芳呈) – Initiator in China Chinese, majored in Business Management, former Director for Supply Chain Management of Septwolves Group Ltd., has rich practical experience in supply chain management and purchasing process management. Currently, he is the Director of Shenzhen Silicon, the Director of Xiamen Silicon and the Board Chairman of Quanzhou Silicon. He is also one of our Angel investors.
Kim Suk ki (金锡基) Korean, South Korea's electronics industry leader, Doctor of Engineering (graduated from the University of Minnesota), Professor at Korea University, previously worked at Bell Labs and Honeywell USA, served as Vice President of Samsung Electronics, senior expert in the integrated circuit design field, IEEE Senior Member, Vice President of the Korea Institute of Electrical Engineers, Chairman of the Korea Semiconductor Industry Association. Has published more than 250 academic papers and holds more than 60 patents.
Zhu Yanping (朱延平) Taiwanese, China, Doctor of Engineering (graduated from National Cheng Kung University), Chairman of the Taiwan Cloud Services Association, Director of the Information Management Department of National Chung Hsing University. Has won the Youth Invention Award from the Taiwan Ministry of Education and the Taiwan Top Ten Information Talent Award. Has deeply studied blockchain applications over the years and led a blockchain technology team to develop systems for health big data and agricultural traceability projects.
Mo Bing (莫冰) Chinese, Doctor of Engineering (graduated from Harbin Institute of Technology), Research Professor at Korea University, Distinguished Fellow of Sun Yat-sen University, Internet of Things expert, integrated circuit expert, Senior Member of the Chinese Society of Micro-Nano Technology, IEEE Member. Has published more than 20 papers and applied for 18 invention patents. Began his research on Bitcoin in 2013 and was one of the earliest users of btc38.com and Korea's Korbit. Served as Technical Director at Korea University in cooperation with Samsung Group to complete the project "Multi-sensor data interaction and fusion based on peer-to-peer network". Committed to the integration of blockchain technology and the Internet of Things to create a truly commercialized public chain.
Wei Songjie (魏松杰) Chinese, Doctor of Engineering (graduated from the University of Delaware), Associate Professor at Nanjing University of Science and Technology, Core Member and Master Supervisor of the Network Space Security Engineering Research Institute, blockchain technology expert in the fields of computer network protocols and applications and network and information security. Has published more than 20 papers and applied for 7 invention patents. Previously worked at Google, Qualcomm, Bloomberg and many other high-tech companies in the United States as an R&D engineer and technical expert; has a wealth of experience in computer system design, product development and project management.
Shan Liang (单良) Graduated from the KOREATECH (Korea University of Technology and Education) Mechanical Engineering Department, PhD in Venture Capital, GM of Waltonchain Technology Co., Ltd. (Korea), Director of Korea Sungkyun Technology Co., Ltd., Chinese Market Manager of the heating component manufacturer NHTECH, a subsidiary of Samsung SDI, economic group leader of the Friendship Association of Chinese Doctoral Students in Korea, one of the earliest users of Korbit, and an experienced digital currency user.
Chen Zhangrong (陈樟荣) Chinese, graduated in Business Management, received a BBA degree from Armstrong University in the United States, President of TIANYU INTERNATIONAL GROUP LIMITED, a leader in China's clothing accessories industry, well-known Chinese business mentor, guest of the CCTV2 "Win in China" show in 2008. Researcher in the field of thinking training for the Practical Business Intelligence e-commerce and MONEYYOU courses, and success expert for the Profit Model course. First encountered Bitcoin in 2013 and has since pursued a strong interest in and in-depth study of digital money and decentralized management thinking. Has a wealth of practical experience in business management, market research, channel construction, business cooperation and business models.
Lin Herui (林和瑞) Chinese, Dean of the Xiamen Zhongchuan Internet of Things Industry Research Institute, Chairman of Xiamen Citylink Technology Co., Ltd., Chairman of Xiamen IOT. He successively served as Nokia R&D Manager and Product Manager and as Supply Chain Director of the Microsoft Hardware Department. In 2014 he started to set up a number of IoT enterprises and laid out the industrial chain of the Internet of Things. The products and services developed under his guidance are very popular. He has assisted the government in carrying out industrial and policy research and participated in the planning and review of multiple government projects for smart cities and IoT towns.
Ma Xingyi (马兴毅) Chinese, China Scholarship Council (CSC) special student, Doctor of Engineering of Korea University, Research Professor at the Fusion Chemical Systems Institute of Korea University, CEO of Korea Sungkyun Technology Co., Ltd., Member of the Korea Industry Association, Associate Member of the Royal Society of Chemistry. Has published his research results in the leading journal Nature Communications and participated in the preparation of a series of teaching materials for Internet of Things engineering titled "Introduction to the Internet of Things". His current research direction covers cross-disciplinary work combining blockchain technology with intelligent medical technology.
Zhao Haiming (赵海明) Chinese, PhD in conductive polymer chemistry from Sungkyunkwan University, core member of the Korea BK21 conductive polymer project, researcher at the Korea Gyeonggi Institute of Sensor, researcher at Korea ECO NCTech Co., Ltd., Vice President of the Chinese Chamber of Commerce, Director of Korea Sungkyun Technology Co., Ltd. He has been engaged in the transfer of semiconductor, sensor and other technologies in South Korea and is an early participant in the digital currency market.
Liu Cai (刘才) Chinese, Master of Engineering, has 12 years of experience in the design and verification of VLSI and a wealth of practical project experience across the RFID chip design process, SoC chip architecture and digital-analog hybrid circuit design, including algorithm design, RTL design, simulation verification, FPGA prototype verification, DC synthesis, back-end place and route, and package testing. Has led a team to complete the development of a variety of navigation and positioning baseband chips and communication baseband chips, finished a series of AES, DES and other encryption module designs, and won the first prize for scientific and technological progress from the GNSS and LBS Association of China. He is also an expert in blockchain consensus mechanisms and the related asymmetric encryption algorithms.
Yang Feng (杨锋) Chinese, Master of Engineering, formerly worked at ZTE. Artificial intelligence expert, integrated circuit expert. Has 12 years of experience in VLSI research and development, architecture design and verification, and 5 years of research experience in artificial intelligence and genetic algorithms. Has won the Shenzhen Science and Technology Innovation Award. Has done in-depth research on the principles and implementation of RFID technology, the underlying infrastructure of blockchain, smart contracts and consensus mechanism algorithms.
Guo Jianping (郭建平) Chinese, Doctor of Engineering (graduated from the Chinese University of Hong Kong), Associate Professor of the Hundred Talents Program at Sun Yat-sen University, academic advisor of master's degree students, IEEE Senior Member, integrated circuit expert. Has published more than 40 international journal and conference papers in the field of IC design and applied for 16 patents in China.
Huang Ruimin (黄锐敏) Chinese, Doctor of Engineering (graduated from the University of Freiburg, Germany), academic advisor of master's degree students, lecturer in the Department of Electronics at Huaqiao University, integrated circuit expert. Mainly explores digital signal processing circuits and system implementation and works on long-term research and development of digital signal processing technology.
Guo Rongxin (郭荣新) Chinese, Master of Engineering, Deputy Director of the Communication Technology Research Center of Huaqiao University. Has more than 10 years of experience in design and development of hardware and software for embedded systems, works on the long-term research and development of RFID and blockchain technology in the field of Internet of Things.
Dai Minhua (戴闽华) Chinese, graduated in Business Management, received a BBA degree from Armstrong University, senior financial expert, served as Vice President and CFO of Tianyu International Group Co., Ltd. Has 13 years of financial work experience and a wealth of experience in developing and implementing enterprise strategies and business plans, as well as achieving business management objectives and development goals.
Liu Dongxin (刘东欣) Chinese, received an MBA from China Europe International Business School, Visiting Scholar of Kellogg School of Management at Northwestern University, strategic management consulting expert, investment and financing expert. His current research interest lies in the impact of the blockchain technology on the financial sector.
Song Guoping (宋国平) Qiu Jun (邱俊) Yan Xiaoqian (严小铅) Lin Jingwei (林敬伟) He Honglian (何红连)
Ko Sang Tae (高尚台) Liu Xiaowei (刘晓为) Su Yan (苏岩) Zhang Yan (张岩) Ma Pingping (马萍萍) Peng Xiande (彭先德) Fu Ke (傅克) Xiao Guangjian (肖光坚) Li Xiong (李雄)
https://seekingalpha.com/article/4152240-amds-growing-cpu-advantage-intel?page=1 AMD's Growing CPU Advantage Over Intel | Mar. 1, 2018 | About: Advanced Micro Devices (AMD) | Raymond Caron, Ph.D. – Tech, solar, natural resources, energy. Summary: AMD's past and economic hazards; AMD's current market conditions; AMD's Zen CPU advantage over Intel. AMD is primarily a CPU company with much experience and a great history in that respect. It holds patents for 64-bit processing, ARM-based processing patents, and GPU architecture patents. AMD built a name for itself in the mid-to-late '90s when it introduced the K-series CPUs to good reviews, followed by the Athlon series in '99. AMD was profitable, and it bought the companies NexGen, Alchemy Semiconductor, and ATI. Past Economic Hazards: If AMD has such a great history, then what happened? Before I go over the technical advantage that AMD has over Intel, it's worth looking at how AMD failed in the past, and whether those hazards still present a risk to AMD, since for investment purposes we're most interested in AMD turning a profit. AMD suffered from intermittent CPU fabrication problems, and was also the victim of sustained anti-competitive behaviour from Intel, which interfered with AMD's attempts to sell its CPUs to the market through Sony, Hitachi, Toshiba, Fujitsu, NEC, Dell, Gateway, HP, Acer, and Lenovo. Intel was investigated and/or fined by multiple countries including Japan, Korea, the USA, and the EU. These hazards need to be examined to see if history will repeat itself. There have been some rather large changes in the market since then. 1) The EU has shown it is not averse to levying large fines, and Intel is still fighting the guilty verdict from the last EU fine levied against it; it has already lost one appeal. It's conceivable that the EU, and other countries, would prosecute Intel again.
This is compounded by the recent security problems with Intel CPUs and the fact that Intel sold these CPUs under false advertising as secure when Intel knew they were not. Here are some of the largest fines dished out by the EU. 2) The Internet has evolved from Web 1.0 to 2.0, and consumers are increasing their online presence each year. This reduces the clout that Intel can wield over the market, as AMD can more easily sell to consumers through smaller Internet-based companies. 3) Traditional distributors (HP, Dell, Lenovo, etc.) are struggling. All of these companies have had recent issues with declining revenue due to Internet competition and ARM competition. These companies are struggling for sales, and this reduces the clout that Intel has over them, as Intel is no longer able to ensure their future; it no longer pays to be in the club. These points are summarized in the graph below, from Statista, which shows "ODM Direct" sales and "other sales" increasing their market share from 2009 to Q3 2017. 4) AMD spun off Global Foundries as a separate company. AMD has a fabrication agreement with Global Foundries, but is also free to fabricate at another foundry such as TSMC, where AMD has recently announced it will be printing Vega at 7nm. 5) Global Foundries developed the capability to fabricate at 16nm, 14nm, and 12nm alongside Samsung and IBM, and bought the process from IBM to fabricate at 7nm. These three companies have been cooperating to develop new fabrication nodes. 6) The computer market has grown much larger since the mid-'90s to 2006 period, when AMD last had a significant tangible advantage over Intel. Computer sales rose steadily until 2011 before starting a slow decline (see the Statista graph below). The decline corresponds directly to the loss of competition in the marketplace between AMD and Intel after AMD released the Bulldozer CPU in 2011.
Tablets also became available starting in 2010 and contributed to the decline in computer sales, which started falling in 2012. It's important to note that computer shipments did not fall in 2017; they remained static, and AMD's GPU market share rose in Q4 2017 at the expense of Nvidia and Intel. 7) In terms of fabrication, AMD has access to 7nm through Global Foundries as well as through TSMC. It's unlikely that AMD will experience CPU fabrication problems in the future. This is something of a reversal of fortunes, as Intel is now experiencing issues with its 10nm fabrication facilities, which are behind schedule by more than 2 years, and maybe longer. It would be costly for Intel to use another foundry to print its CPUs due to the overhead that its current foundries have on its bottom line. If Intel is unable to get the 10nm process working, it's going to have difficulty competing with AMD. AMD: Current market conditions. In 2011 AMD released its Bulldozer line of CPUs to poor reviews and was relegated to selling on the discount market, where sales margins are low. Since that time AMD's profits have been largely determined by the performance of its GPU and Semi-Custom business. Analysts have become accustomed to looking at AMD's revenue from a GPU perspective, which isn't currently seen in a positive light due to the relation between AMD GPUs and cryptocurrency mining. The market views cryptocurrency as further risk to AMD. When Bitcoin was introduced it was mined with GPUs; when mining switched to ASICs (application-specific integrated circuits, which are cheaper because they are simple, single-purpose chips) for increased profitability, the GPUs purchased for mining were resold on the market and ended up competing with, and hurting, new AMD GPU sales. There is also perceived risk to AMD from Nvidia, which has favorable reviews for its Pascal GPU offerings.
While AMD has been selling GPUs, it hasn't increased GPU supply due to cryptocurrency demand, while Nvidia has. This resulted in a very high cost for AMD GPUs relative to Nvidia's. There are strategic reasons for AMD's current position: 1) While AMD GPUs are profitable and greatly desired for cryptocurrency mining, AMD's market access is through third-party resellers, who enjoy the revenue from marked-up GPU sales. AMD most likely makes lower margins on GPU sales relative to Zen CPU sales due to the higher fabrication costs associated with larger dies and the corresponding lower yield. For reference I've included the sizes of AMD's and Nvidia's GPUs as well as AMD's Ryzen CPU and Intel's Coffee Lake 8th-generation CPU. This suggests that if AMD had to pick and choose between products, it would focus on Zen due to higher yield, higher revenue from sales, and an increase in margin. 2) If AMD maintained historical levels of GPU production in the face of cryptocurrency demand while increasing production of Zen products, it would maximize potential income from its highest-margin products (EPYC) while reducing future vulnerability to second-hand GPUs being resold on the market. 3) AMD was burned in the past by second-hand GPUs and wants to avoid repeating that experience. AMD stated several times that the cryptocurrency boom was not factored into forward-looking statements, meaning it hasn't produced more GPUs in expectation of more GPU sales. In contrast, Nvidia increased its production of GPUs due to cryptocurrency demand, as AMD did in the past. Since its Pascal GPU has entered its second year on the market and is capable of running video games for years to come (1080p and 4K gaming), Nvidia will be entering a position where it will be competing directly with older GPUs used for mining that are as capable as the cards Nvidia is currently selling.
Second-hand GPUs from mining are known to function very well, often needing only a fan replacement. This is because semiconductors work best in a steady state, as opposed to being turned on and off, so a card endures less wear when used 24/7. The market is also pessimistic regarding AMD's P/E ratio. The market is accustomed to evaluating stocks using the P/E ratio, but this statistic is not actually accurate in evaluating new companies, or companies going into or coming out of bankruptcy; it is more accurate for companies with a consistent operating trend over time. "Similarly, a company with very low earnings now may command a very high P/E ratio even though it isn't necessarily overvalued. The company may have just IPO'd and growth expectations are very high, or expectations remain high since the company dominates the technology in its space." (P/E Ratio: Problems With The P/E) I regard the pessimism surrounding AMD stock due to GPUs and past history as a positive trait, because the threat is minor. While AMD is experiencing competitive problems with its GPUs in gaming, AMD holds an advantage in blockchain processing, which stands to be a larger and more lucrative market. I also believe that AMD's progress with Zen, particularly with EPYC, and the recent Meltdown-related security and performance issues with all Intel CPU offerings far outweigh any GPU turbulence. This turns the pessimism surrounding AMD's GPUs into a benefit for the stock. 1) A pessimistic group prevents the stock from becoming a bubble. It provides a counter-argument against hype relating to product launches that are not proven by earnings, which is unfortunately a historical trend for AMD, as it has had difficulty selling server CPUs and consumer CPUs in the past due to market interference by Intel. 2) It creates predictable daily, weekly, monthly, and quarterly fluctuations in the stock price that can be used to generate income.
3) Due to recent product launches and market conditions (the Zen architecture advantage, the 12nm node launching, the Meltdown performance flaw affecting all Intel CPUs, Intel's problems with 10nm) and the fact that AMD is once again selling a competitive product, AMD is making more money each quarter. Therefore the base price of AMD's stock will rise with earnings, as we're seeing. This is also a form of investment security, where perceived losses are returned over time, because the stock is in a long-term upward trajectory as new products reach a responsive market. 4) AMD remains a cheap stock. While it's volatile, it's in a long-term upward trend due to market conditions and new product launches, so an investor with a limited budget can buy more stock to maximize earnings. This also means the stock is more easily manipulated, as seen during the Q3 2017 ER. 5) The pessimism is unfounded. The cryptocurrency craze hasn't died; it increased, fell, and recovered. The second-hand market did not see an influx of mining GPUs, as mining remains profitable. 6) Blockchain is an emerging market that may eclipse the gaming market in size due to the wide breadth of applications across various industries. Vega is a highly desired product for blockchain applications, as AMD has retained a processing and performance advantage over Nvidia. There are more and rapidly growing applications for blockchain every day, all (or most) of which will require GPUs, for instance at Microsoft, the Golem supercomputer, IBM, HP, Oracle, Red Hat, and others. Long-term upward trend: AMD is at the beginning of a long-term upward trend supported by a comprehensive and competitive product portfolio that is still being delivered to the market; AMD refers to this as product ramping. AMD's most effective Zen products are EPYC and the Raven Ridge APU. EPYC entered the market in mid-December and was completely sold out by mid-January, but has since been restocked.
Intel remains uncompetitive in that industry, as its CPU offerings are hampered by a 40% performance flaw due to Meltdown patches. Server CPU sales command the highest margins for both Intel and AMD. The AMD Raven Ridge APU was recently released to excellent reviews. The APU is significant due to high GPU prices driven by cryptocurrency, and the fact that the APU is a CPU/GPU hybrid with the performance to play today's games at 1080p. The APU also supports the Vulkan API, which can call upon multiple GPUs to increase performance, so a system can be upgraded later with an AMD or Nvidia GPU that supports Vulkan for increased performance in games or workloads programmed to support it; or the APU can be replaced when GPU prices fall. AMD also stands to benefit as Intel confirmed that its new 10nm fabrication node is behind in technical capability relative to the Samsung, TSMC, and Global Foundries 7nm fabrication processes. This brings into question Intel's competitiveness in 2019 and beyond. Take-Away • AMD was uncompetitive with respect to CPUs from 2011 to 2017. • When AMD was competitive, from 1996 to 2011, it recorded profits and bought three companies, including ATI. • AMD's CPU business suffered from: • market manipulation by Intel (Intel was fined by the EU, Japan, and Korea, and settled with the USA); • foundry productivity and upgrade complications. • AMD has changed: • Global Foundries was spun off as an independent business. • It has developed 14nm and 12nm fabrication and is implementing 7nm. • Intel is late on 10nm, which is less competitive than the 7nm node. • AMD can fabricate products using multiple foundries (TSMC, Global Foundries). • The market has changed: • More AMD products are available on the Internet, and both the adoption of the Internet and the size of the Internet retail market have exploded, thanks to the success of smartphones and tablets. • Consumer habits have changed; more people shop online each year.
• Traditional retailers have lost market share. • The computer market is larger on average, but has been declining; while computer shipments declined in Q2 and Q3 2017, AMD sold more CPUs. • AMD was uncompetitive with respect to CPUs from 2011 to 2017. • Analysts look to GPU and Semi-Custom sales for revenue. • The cryptocurrency boom intensified; no crash occurred. • AMD did not increase GPU production to meet cryptocurrency demand. • Blockchain represents new growth potential for AMD GPUs. • Pessimism acts as security against a stock bubble and corresponding bust, and creates cyclical volatility in the stock that can be used to generate profit. • The P/E ratio is misleading when used to evaluate AMD. • AMD has long-term growth potential. • In 2017 AMD released a competitive product portfolio. • Since Zen was released in March 2017, AMD has beaten ER expectations. • AMD returned to profitability in 2017. • AMD is taking measurable market share from Intel in OEM desktop CPUs and in the CPU market overall. • The high-margin server product EPYC was released in December 2017, just before the worst-ever CPU security bug was found in Intel CPUs, which were hit with a detrimental 40% performance patch. • The Ryzen APU (Raven Ridge) was announced in February 2018 to meet the gaming GPU shortage created by high GPU demand for cryptocurrency mining. • Blockchain is a long-term growth opportunity for AMD. • Intel is behind the competition for the next CPU fabrication node. AMD's growing CPU advantage over Intel: About AMD's Zen. Zen is a technical breakthrough in CPU architecture because it's a modular design and because it is a small CPU that provides similar or better performance than the Intel competition. Since Zen was released in March 2017, we've seen AMD go from 18% CPU market share in OEM consumer desktops to essentially 50% market share; this was supported by comments from Lisa Su during the Q3 2017 ER call, by MindFactory.de, and by Amazon sales of CPUs. We also saw AMD increase its market share of total desktop CPUs.
We also started seeing market share flux between AMD and Intel as new CPUs are released. Zen is a technical breakthrough supported by a few general guidelines relating to electronics, which provide AMD with an across-the-board advantage over Intel in every CPU market addressed. 1) The larger the CPU, the lower the yield. The Zen die that makes up Ryzen, Threadripper, and EPYC is smaller (44 mm² compared to 151 mm² for Coffee Lake). A larger CPU means fewer chips fabricated per wafer. AMD will have roughly 3x the fabrication yield for each Zen printed compared to each Coffee Lake printed, therefore each CPU has a much lower cost of manufacturing. 2) The larger the CPU, the harder it is to fabricate without errors. The chance that a CPU will be perfectly fabricated falls exponentially with increasing surface area, so Intel will have fewer high-quality CPUs printed compared to AMD. This means that AMD will make a higher margin on each CPU sold. AMD's supply of perfectly printed Ryzens (1800X) was so high that the company had to sell them at a reduced price in order to meet demand for the cheaper Ryzen 5 1600X; if you bought a 1600X in August/September, you probably ended up with an 1800X. 3) Larger CPUs are harder to fabricate without errors on smaller nodes. The technical capability to fabricate CPUs at smaller nodes becomes more difficult due to the higher precision required to fabricate at a smaller node, and the corresponding increase in errors. "A second reason for the slowdown is that it's simply getting harder to design, inspect and test chips at advanced nodes. Physical effects such as heat, electrostatic discharge and electromagnetic interference are more pronounced at 7nm than at 28nm. It also takes more power to drive signals through skinny wires, and circuits are more sensitive to test and inspection, as well as to thermal migration across a chip.
All of that needs to be accounted for and simulated using multi-physics simulation, emulation and prototyping." (Is 7nm The Last Major Node?) "Simply put, the first generation of 10nm requires small processors to ensure high yields. Intel seems to be putting the smaller die sizes (i.e. anything under 15W for a laptop) into the 10nm Cannon Lake bucket, while the larger 35W+ chips will be on 14++ Coffee Lake, a tried and tested sub-node for larger CPUs. While the desktop sits on 14++ for a bit longer, it gives time for Intel to further develop their 10nm fabrication abilities, leading to their 10+ process for larger chips by working their other large chip segments (FPGA, MIC) first." There are plenty of steps where errors can be created within a fabricated CPU. This is most likely the culprit behind Intel's inability to launch its 10nm fabrication process: it is simply unable to print such a large CPU on such a small node with high enough yields to make the process competitive. Intel thought it was ahead of the competition with respect to printing large CPUs on a small node, until AMD avoided the issue completely by designing a smaller, modular CPU. Intel avoided any mention of its 10nm node during its Q4 2017 ER, which I interpret as bad news for Intel shareholders. If you have nothing good to say, you don't say anything, and Intel having nothing to say about something that is fundamentally critical to its success as a company can't be good. Intel is on track, however, to deliver hybrid CPUs where some small components are printed on 10nm. It has recently also come to light that Intel's 10nm node is less competitive than the Global Foundries, Samsung, and TSMC 7nm nodes, which means that Intel is now firmly behind in CPU fabrication. 4) AMD Zen is a new architecture built from the ground up, while Intel's CPUs are built on top of older architecture developed with 30-year-old strategies, some of which we've recently discovered are flawed.
This resulted in the Meltdown flaw and the Spectre flaws, and also includes the ME and AMT bugs in Intel CPUs. While AMD is still affected by Spectre, AMD has only ever acknowledged being fully susceptible to Spectre 1, as AMD considers Spectre 2 difficult to exploit on an AMD Zen CPU: "It is much more difficult on all AMD CPUs, because BTB entries are not aliased - the attacker must know (and be able to execute arbitrary code at) the exact address of the targeted branch instruction." (Technical Analysis of Spectre & Meltdown - AMD) Further reading: Spectre and Meltdown: Linux creator Linus Torvalds criticises Intel's 'garbage' patches | ZDNet; FYI: Processor bugs are everywhere - just ask Intel and AMD; Meltdown and Spectre: Good news for AMD users, (more) bad news for Intel; Cybersecurity agency: The only sure defense against huge chip flaw is a new chip; Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign. Take-Away • AMD Zen enjoys a CPU fabrication yield advantage over Intel. • AMD Zen enjoys a higher yield of high-quality CPUs. • Intel's CPUs suffer a 40% performance drop due to the Meltdown flaw, which affects server CPU sales. AMD stock drivers: 1) EPYC • A critically acclaimed CPU that is sold at a discount compared to Intel. • Not affected by 40% software slowdowns due to Meltdown. 2) Raven Ridge desktop APU • Targets the unfed GPU market, which has been stifled by cryptocurrency demand. • Customers can upgrade to a new CPU or add a GPU at a later date without changing the motherboard. • AM4 motherboard supported until 2020. 3) Vega GPU sales to Intel for 8th-generation CPUs with integrated graphics • AMD gains access to the complete desktop and mobile market through Intel. 4) Mobile Ryzen APU sales • Providing gaming capability in a compact power envelope. 5) Ryzen and Threadripper sales • Fabricated on 12nm in April. • May eliminate Intel's last remaining CPU advantage in single-core IPC.
• AM4 motherboard supported until 2020. • 7nm Ryzen on track for early 2019. 6) Others: Vega, Polaris, Semi-Custom, etc. • I consider any positive developments here to be gravy. Conclusion: While in the past Intel interfered with AMD's ability to bring its products to market, the market has changed. The Internet has grown significantly and is now a large market that dominates in computer sales. It's questionable whether Intel still has the influence to affect this new market, and doing so would most certainly result in fines and further bad press. AMD's foundry problems were turned into an advantage over Intel. AMD's more recent past was heavily influenced by the failure of the Bulldozer line of CPUs, which dragged on AMD's bottom line from 2011 to 2017. AMD's Zen line of CPUs is a breakthrough that exploits an alternative, superior chip-design strategy resulting in a smaller CPU. A smaller CPU enjoys compounded yield and quality advantages over Intel's CPU architecture. Intel's lead in CPU performance will at the very least be challenged, and will more likely come to an end in 2018, until it releases a redesigned CPU. I previously targeted AMD to be worth $20 by the end of the Q4 2017 ER, based on the speed with which Intel was able to get products to market; in comparison, AMD is much slower. I believe the stock should be there, but the GPU-related story was prominent due to the cryptocurrency craze. Financial analysts need more time to catch on to what's happening with AMD; they need an ER that is driven by CPU sales, and I believe Q1 2018 is the ER to do that. AMD had EPYC stock in stores when the Meltdown and Spectre flaws hit the news. These CPUs were sold out by mid-January and are high-margin sales. There are many variables at play within the market; however, barring any disruptions, I'd expect AMD to be worth $20 at some point in 2018 due to these market drivers.
If AMD sold enough EPYC CPUs due to Intel's ongoing CPU security problems, this may occur following the Q1 2018 ER. However, if anything is customary with AMD, it's that these things always take longer than expected.
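The yield argument in the Zen section above (smaller dies give both more chips per wafer and a higher fraction of defect-free chips) can be sketched numerically. This is an illustrative model only, not the author's calculation: it uses the classic Poisson yield approximation Y = exp(-A·D0) and a rough dies-per-wafer estimate with an edge-loss term; the defect density of 0.1 defects/cm² and the 300 mm wafer are assumed placeholders.

```python
import math

def die_yield(area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: fraction of defect-free dies, Y = exp(-A * D0)."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough gross-die estimate: wafer area over die area, minus an edge-loss term."""
    radius = wafer_diameter_mm / 2.0
    wafer_area = math.pi * radius * radius
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

D0 = 0.1  # assumed defects per cm^2 -- placeholder, not a published figure
for name, area in [("Zen die (44 mm^2)", 44.0), ("Coffee Lake (151 mm^2)", 151.0)]:
    y = die_yield(area, D0)
    good = gross_dies_per_wafer(300, area) * y
    print(f"{name}: yield {y:.1%}, ~{good:.0f} good dies per 300 mm wafer")
```

With these assumed numbers the smaller die comes out with several times as many good dies per wafer, which is the per-chip cost advantage the article describes; the exact multiplier depends on the defect density chosen.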
A good chance for a free trial of VEO on the Blackminer F1, with revenue at $5.3/day
The revenue of VEO has risen to $5.3/day, so this is a good chance for a free trial on the Blackminer F1. This is the entrance to the trial mining: https://www.hashaltcoin.com/en/trial_miners/2 Today's profit from VEO is very satisfying, so I would like to share some opinions about VEO, and you can judge whether to trial it for free or not. VEO is a fully mined public chain with no pre-mine. Zack, the main developer of the project, who also used to be the first CTO of AE, did not mine any tokens in advance during development. We believe VEO could be much more valuable in the future, so you now have a great chance to mine it on the Blackminer F1 for free, and even get some in your pocket as a lottery ticket. You can download the wallet here: https://myveowallet.com/ Here are some details about the Blackminer F1. In September 2018, Blackminer's first batch of FPGA miners, model Blackminer F1, was officially launched. It currently has 22 algorithms built in. The price is $2,000, all in stock. The newly released version of the Blackminer F1 is the F1+, which comes with three boards and supports the same algorithms as the F1; with its updated hardware design, its performance is about 1.6 to 1.8 times that of one F1. You can check the daily profit on this page: https://www.hashaltcoin.com/en/calculation Third-party reviews: ruplikmastik666's test review: https://bitcointalk.org/index.php?topic=5039924.0 Bittawm's review: https://bitcointalk.org/index.php?topic=5065403.msg47689832#msg47689832 The Bitcoin Miner YouTube channel review: https://youtu.be/lK2aACwneks Official links: Official website: https://hashaltcoin.com/ Official Discord: https://discord.gg/eUNRSgy (very active, mainly to share and discover innovative cryptos and announce development progress) Bitcointalk ANN: https://bitcointalk.org/index.php?topic=5029989.0 Sales Manager: Lili, WhatsApp: +8618612535678
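The $5.3/day figure above can be turned into a quick back-of-the-envelope profitability check. This is a rough sketch only: the daily revenue and the $2,000 miner price come from the post, but the 300 W power draw and $0.10/kWh electricity price are placeholder assumptions, not official Blackminer F1 specifications.

```python
def daily_profit(revenue_usd: float, power_watts: float, elec_usd_per_kwh: float) -> float:
    """Daily mining profit = revenue minus daily electricity cost."""
    kwh_per_day = power_watts * 24 / 1000.0
    return revenue_usd - kwh_per_day * elec_usd_per_kwh

def payback_days(miner_price_usd: float, profit_per_day: float) -> float:
    """Days needed for cumulative profit to cover the miner's purchase price."""
    return float("inf") if profit_per_day <= 0 else miner_price_usd / profit_per_day

# Assumed inputs: $5.3/day revenue (from the post); 300 W and $0.10/kWh are placeholders.
profit = daily_profit(5.3, 300, 0.10)
print(f"profit/day ≈ ${profit:.2f}, payback ≈ {payback_days(2000, profit):.0f} days")
```

Under these assumptions the profit is about $4.58/day, giving a payback period of well over a year on VEO alone; actual figures depend heavily on coin price, difficulty, and local electricity rates.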
Why I see Vertcoin as a $200 coin when really considering ASIC resistance
First of all, Vertcoin does indeed have a tremendous community, and this should not be understated. However, that is only a fraction of this coin's value proposition. I just want to expand on the ASIC resistance thing a bit. As an electrical engineer who has actually designed ASICs, I have some background here, and what I can tell you is that the term "ASIC resistant" is a little misleading: in theory, any algorithm can be turned into an ASIC.

An ASIC, or Application-Specific Integrated Circuit, is a digital, analog, or mixed-signal circuit that has been cast into sea-of-gates, semi-custom, or full-custom ASIC technology. The cheapest route is sea of gates. If one didn't want to do a sea-of-gates ASIC, one could implement the algorithm in an FPGA, or Field-Programmable Gate Array; Altera and Xilinx are the dominant players there. In the early days of Bitcoin there were many FPGA miners; it was a very common way to mine Bitcoin.

Overall, it takes somewhere between USD $50,000 and $1,000,000 to make an ASIC. It's an expensive process. There is a tremendous amount of engineering: the circuit is designed in SystemVerilog, Verilog, or VHDL, with very extensive testbenches to make sure that when the chip is made, it works the first time. Engineers prototype ASICs in FPGAs, and the development boards for ASIC emulation can cost $20k or more by themselves. Then the design goes to a foundry where the chip is fabricated, and that is expensive: $50k to $500k. So there has to be motivation to make an ASIC, such as high-volume chip sales. For sea-of-gates technology, a rule of thumb is that the break-even point comes when a company sells 1,000 to 2,000 chips a year of the design that has been made into an ASIC; that is because sea of gates is about a $100k process. The ASIC resistance of Vertcoin is not technology-related, i.e. the algorithm currently being used could be made into an ASIC.
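The break-even rule of thumb above can be put in rough numbers. A minimal sketch; the NRE (non-recurring engineering) cost and per-unit prices below are illustrative assumptions consistent with the post's "$100k process" figure, not real quotes:

```python
# Rough ASIC-vs-FPGA break-even sketch. All dollar figures are
# illustrative assumptions, not vendor quotes.

def breakeven_units(nre_cost: float, fpga_unit_cost: float, asic_unit_cost: float) -> float:
    """Unit count at which total ASIC cost (NRE + units) drops below
    buying the equivalent number of FPGAs."""
    saving_per_unit = fpga_unit_cost - asic_unit_cost
    if saving_per_unit <= 0:
        raise ValueError("ASIC must be cheaper per unit than the FPGA")
    return nre_cost / saving_per_unit

# Assumed numbers: $100k sea-of-gates NRE, $80 per FPGA vs $15 per ASIC.
units = breakeven_units(100_000, 80, 15)
print(f"Break-even at about {units:.0f} units/year")  # ~1538, inside the 1,000-2,000 band
```

With these assumed prices the break-even lands at roughly 1,500 units a year, which matches the 1,000-2,000 chips/year rule of thumb in the post.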
What makes Vertcoin ASIC resistant is the commitment of the team to change the algorithm if someone does make an ASIC to mine Vertcoin. This is what gives Vertcoin its value proposition, and I really appreciate that! It is a de facto way to limit the power of miners in one simple swipe. Who wants to deal with this Bitcoin forking situation anymore? At this point, with the upcoming fork, it seems more and more unnecessary. I see Bitcoin as a store-of-value layer, and other coins such as VTC and LTC as transaction-layer coins. To me, what gives VTC value is the intention of the community AND its consequent action.
Satoshi Nakamoto said that the biggest flaw in the Bitcoin network is the miners. That's because under the consensus algorithm, transaction throughput depends on the miners' calculations. Basically, we consume a lot of electricity to gather multiple transactions into a block, just so three Chinese mining pools can smash that block and take the Bitcoin reward. And as if that weren't enough, the mining pools can inject fake transactions into the network to clog it, so transaction fees for us (peasants) go higher.
Why are we using hardware and electricity to create one block?
Why is the consensus algorithm dependent on new block creation?
Where is the new Internet we all wanted back in 2009-2010, where millions of computers would be the network?
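The "hardware and electricity" in the questions above goes into proof-of-work: miners repeatedly hash a candidate block header until the result falls below a difficulty target. A minimal Python sketch; the header bytes and difficulty here are illustrative, not Bitcoin's real header encoding:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Toy proof-of-work: find a nonce whose double-SHA256 over header+nonce
    falls below a target. Real Bitcoin compares against a full 256-bit target
    encoded in the block header; this is a simplified illustration."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# 16 leading zero bits is easy on a CPU; every extra bit doubles the expected
# work, which is exactly why mining moved from CPUs to GPUs, FPGAs, and ASICs.
print("found nonce:", mine(b"example header", difficulty_bits=16))
```

Each extra difficulty bit doubles the expected number of hashes, so the only lever left is hashes per joule, hence the hardware arms race the post complains about.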
https://medium.com/@Skycoinproject/cyberbalkanization-and-the-future-of-the-internets-f03f2b590c39

A) Skycoin is the bigger brother of Bitcoin. Early developers of Bitcoin knew that miners would control the Bitcoin network in the future, so some of them started to research a new consensus algorithm called Obelisk: https://www.skycoin.net/blog/posts/obelisk-the-skycoin-consensus-algorithm/

B) Skycoin resolved the 51% attack and the Sybil attack, has zero transaction fees, confirms a transaction in 1-2 seconds, and is private. But most importantly, Skycoin is the only crypto out there that has fixed the volatility problem of a cryptocurrency. What's that? Imagine the price of Skycoin going higher and higher: peasants will "HoDL" it, so the term "currency" is lost. Why would anyone spend an asset that keeps appreciating?

B1) One Skycoin kept in the wallet continuously generates a second currency called CoinHour: 1 SKY creates 24 CoinHours per day, and so on. CoinHour is backed by bandwidth via Skywire (a software-defined network), the New Internet that gives Skycoin real, commodity-level value. https://www.youtube.com/watch?v=-CbSdVIwr8E

B2) In this ecosystem Skycoin behaves like an equity, and CoinHour is the real currency. For example, the Skycoin price could reach 1 million while the price of CoinHour stays independent; its equilibrium is set by market supply and demand. https://explorer.skycoin.net/app/blocks/1

C) Ethereum has a buggy programming language, and all the shitcoins sit on the Ethereum blockchain (database) at only 30 tx/s. Why would anyone gather all the data in ONE database?!
C1) Skycoin created CX, the first deterministic cryptographic programming language: https://www.skycoin.net/blog/posts/cx-overview/

C2) Skycoin created Fiber (https://www.skycoin.net/fibe): basically you can create your own blockchain with 300-3000 tx/s, private or public, with hardware customization (law firms, government entities, and so on as early adopters).

D) Skywire is the New Internet, built at both the hardware and software level:
- Skywire is hardware agnostic
- Skywire has its DIY antennas
- Skywire has FPGA boards
- Skywire has 10k nodes online (more than Tor)

Bibliography:
TL;DR: one neighbor is rendering a movie. He wants 1 TB/s, so he will pay CoinHour to his neighbors to borrow the bandwidth capacity of their Sky clusters and antennas. Skycoin address: 25139AGYjwGwgKMZEA268GbJyXrZGWF533i
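The CoinHour accrual described in B1 is simple to state in code. A sketch assuming the linear rate the post gives (1 SKY generates 24 CoinHours per day, i.e. 1 per hour); the function name and linearity are my assumptions:

```python
def coinhours_accrued(sky_balance: float, hours_held: float) -> float:
    """CoinHours earned while holding SKY, assuming the post's stated rate
    of 24 CoinHours per SKY per day (1 per SKY per hour), accrued linearly."""
    rate_per_sky_per_hour = 1.0
    return sky_balance * hours_held * rate_per_sky_per_hour

# 10 SKY held for one day:
print(coinhours_accrued(10, 24))  # 240.0
```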
Why is getting into FPGAs such a crappy experience?
I'm a hobbyist and this is my first venture into FPGAs. I understand how an FPGA works in theory: it's just a bunch of combinational logic connected by clocked flip-flops, whose topology and combinational functions can be programmed with a high-level language. I bought a Xilinx board from embeddedmicro.com and I'm going to work through their tutorials. All I want to be able to do is specify a bunch of registers and whatnot, and connect them with clocked flip-flops to do some really basic stuff, like a simple CPU with 1-2 custom instructions or something.

So why do I have to download a GIANT SIX-GIGABYTE FILE TO DO THAT? What could this software possibly be doing that it needs to be that big? In a sane world, all I'd need is a board and a simple compiler that takes the high-level language and turns it into the topology file to upload to the board. But in the insanity in which I am currently living, I have to download some gigantic IDE that is going to be huge and probably slower than mining bitcoins on an NES. I don't know, because IT'S STILL DOWNLOADING.

And to even get to the download, I had to log into the website, register, and give them a name and a physical address (and God forbid I should leave the "Company" field empty!). The licensing on their website looks like you need an MBA to understand it. This company sells pieces of hardware, FFS! Why in Stallman's name can't they just make the software FOSS and let anyone download it, instead of all this BS about WebPACK this and annual upgrade that? Xilinx, in case you haven't noticed, in order for anyone to actually use your software, THEY HAVE TO BUY A CHIP OR BOARD, AND YOU CAN MAKE MONEY OFF YOUR CUSTOMERS THAT WAY. CHARGING FOR SOFTWARE, OR HAVING A BYZANTINE PROCESS FOR GETTING A FREE LICENSE, MAKES ZERO SENSE FOR A HARDWARE COMPANY.
Does anyone know a place where you can just buy an FPGA board, plug it into a USB port, sudo apt-get install some FOSS compiler, type your Verilog or VHDL or whatever into Emacs, run 2-5 commands, and have a running design? If such a place doesn't exist, some startup needs to disrupt this industry. If you make it easy for people to develop for your hardware, those devs will be inclined to buy your product just to make their lives easier.
Hello my fellow shibes, sit back for a sad story :( About a year ago I played around with Bitcoin mining for a few weeks. I've got an assortment of Altera development boards, so naturally I tried mining BTC on them. They did pretty okay: I was getting 120 Mhash/s on the Altera SoCKit (Cyclone V) using basically someone's open-source project ported to the device, with no optimization. I had also written an optimized core for a smaller device and doubled its performance, so it stands to reason that I could probably get to around 200 Mhash with some careful coding on this board. Pretty reasonable: competitive with lower-end graphics cards at significantly lower power dissipation. Of course, by the time I was doing this, commercial ASIC miners were just coming on the scene, and they blow FPGAs out of the water! I mined a few mBTC, then moved on to other things and forgot about cryptocurrencies for a while.

Along comes Dogecoin (woof!). I fire up my laptop with GT 620M graphics and get 25 kHash. I decide to look at an FPGA implementation again (starting with an open-source project here). I port it to the Cyclone V, flash it, and get... 1.5 kHash. Not good. Now, there's room for ~4 cores, so that's 6 kHash theoretical with the unoptimized core. I started thinking of potential optimizations, and even started writing a new core, but then I paused, did some calculations, and the results aren't good. It really is a memory-bound algorithm! The FPGA only has enough memory for 4 full scratchpad blocks (or 8 half, etc.), meaning that folding more than 4 processing cores into a pipeline is a waste. A 4-deep salsa pipeline has really long path lengths, making for a low max clock rate... but a longer pipeline (with a correspondingly faster clock rate) has more wasted cycles.
We can do partial scratchpads and regenerate the missing data to keep the pipeline fuller, but this has diminishing returns; some quick calculations suggest it's asymptotic to about a 2x improvement (and that's theoretical; adding more complexity tends to slow the core down overall). The thing is, modern GPUs are running 256-512-bit memory interfaces, at 2-4 GHz, to 2-4 GB of memory. That memory bandwidth and size is just something an FPGA cannot reach, at least not at a competitive price point. Even the high-end Altera FPGAs don't have more than a handful of MB of RAM. Moreover, it's hard to even get enough I/O pins to implement a 512-bit memory interface on an FPGA, and it certainly won't be running at 4 GHz!

So, TL;DR: GPUs win this round! A custom FPGA board with a very wide external memory might be competitive, and an ASIC could be competitive (but would basically be a GPU). No FPGA for this shibe :(
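The bandwidth gap driving that "GPUs win" conclusion can be put in rough numbers. A back-of-envelope sketch; the bus widths and effective data rates below are illustrative assumptions for hardware of that era, not measured figures:

```python
# Back-of-envelope peak memory bandwidth comparison for a memory-bound
# algorithm like scrypt. All figures are illustrative assumptions.

def bandwidth_gbps(bus_bits: int, effective_mhz: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) times data rate."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

gpu  = bandwidth_gbps(256, 4000)  # 256-bit GDDR5 at 4 GT/s effective
fpga = bandwidth_gbps(32, 800)    # single 32-bit DDR3-800 channel on a dev board
print(f"GPU ~{gpu:.0f} GB/s vs FPGA board ~{fpga:.1f} GB/s ({gpu/fpga:.0f}x gap)")
```

With these assumed numbers the GPU has roughly a 40x raw bandwidth advantage, before even counting its gigabytes of scratchpad capacity versus the FPGA's few MB of on-chip RAM.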
An open-source FPGA miner has just been released, running at 80 Mhash/s but at a cost of $585. Its efficiency is discussed below, quoted from a post in the thread.
At 80 Mhash/s, I would need at least 3 of these to match a single 5830's hash rate. That is $595 × 3 = $1,785 at full price, vs. $190 for the 5830. Given that the 5830 consumes $11 a month in electricity, and assuming this board consumes zero electricity, it would take more than 145 months, or 12 years, to recover the investment, always comparing against a 5830.
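The quoted payback arithmetic checks out. A sketch reproducing it, with the numbers taken from the post and the FPGA's power draw assumed to be zero, as the poster does:

```python
# Payback period for FPGA boards vs a Radeon 5830, per the quoted post.
fpga_cost = 595 * 3          # three boards to match one 5830's hash rate
gpu_cost = 190               # price of the 5830
electricity_per_month = 11   # the 5830's monthly power bill (FPGA assumed free)

extra_outlay = fpga_cost - gpu_cost          # $1,595 more up front
months_to_recover = extra_outlay / electricity_per_month
print(f"~{months_to_recover:.0f} months (~{months_to_recover / 12:.0f} years)")
# ~145 months (~12 years)
```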
Apologies, but no more development information will be posted. I've been offered a 25% share by someone who owns 2 FPGA clusters. If you haven't seen that type of hardware before, think 156 FPGAs per machine.
From those posts, what we can understand is that the factors affecting FPGAs right now are high procurement cost, low running cost, and ease of scalability. What this means is that, with the increasing total hash rate of the network (30 Ghash/day at the last difficulty adjustment), the question becomes: when will the difficulty render GPUs inefficient relative to their running cost? Remember to take into account that FPGAs are usually run in clusters; even though it would not be beneficial to buy one outright, those who have access to FPGAs are the first movers and eventual dominant forces in the mining market. Of course, in the end, ASICs are where it's at. Anyone? =D Edit: read more stuff, added info.
So you're sick of just mining on your GPU, and not a fan of the electric bill after a month of mining? There has to be a better option out there than the loud GPU in your gaming computer. There is! Shortly after GPUs became popular for Bitcoin mining, enterprising folks started looking at other things they could repurpose to mine bitcoins more efficiently. Around mid-2011, the first such devices sprang up, called FPGAs, or Field-Programmable Gate Arrays. These are nothing new to the hobbyist community; they've been around for a while among crackers and other security-conscious folks looking for ways to defeat cryptographic locks. Hey! I know something that uses cryptographic calculations to secure its network! BITCOIN!

So some miners developed their own boards, slapped some FPGA chips on them (most commonly the Spartan-6), and wrote specific firmware and "bitstreams" to calculate Bitcoin hashes more efficiently. The first generations were sort of slow, but they still had better efficiency than a GPU. Some of the latest generation included the Icarus boards, Cairnsmore, X6500, and ModMiner Quad. In early 2012 (I think my timeline is right), Butterfly Labs (BFL) was selling their own FPGA miner that hashed at 800 Mhash/s using 80 watts and only cost US$600. Amazing!

These grew very popular, but people could see that FPGAs still weren't the most efficient way to hash their shares. BFL then announced that they would be designing their own chips, which would be orders of magnitude faster than anything ever seen. These would be the ASICs (Application-Specific Integrated Circuits) everyone is raving about. ASICs are, as the name implies, designed for one thing and one thing only: Bitcoin. That is all they can do, and they can't really be repurposed for other applications the way an FPGA can. Who wouldn't want a US$150 "Jalapeno" that hashes at 3.5 GIGAhashes/s using only power from a USB port?? Crazy! So in summer 2012, BFL said they would ship before Christmas.
Various things happened, and we still don't have any confirmed ship dates from BFL. A few other companies have sprouted up. ASICminer, I believe, is developing their own chips to mine for themselves, but in a responsible way, so as not to threaten the network with a sudden influx of hashing. bASIC was a fiasco developed by the creator of the ModMiner Quad (which is actually a fantastic miner; I own one and love it): he took many preorders and promised lots of people amazing ASIC performance, but in early 2013 the stress of the whole endeavour got to him and he gave up and refunded the money (I think it's still being refunded now, or maybe it's been cleared up already). Avalon is the only company we know has ASIC mining hardware in the wild. It is not certain exactly how many units are out there, but they have been confirmed by independent sources. The Avalon units are expensive (75 BTC) and have been sold in limited production runs (or batches) of a few hundred units that pre-sold out very quickly.

All of this info is gleaned from the Custom Hardware forum over at bitcointalk.org over the past year or so that I've been involved in Bitcoin. I may have some facts wrong, but this is the gist of the situation, and hopefully it gives you some insight into the state of the hardware war around Bitcoin. Thanks for reading!
Battlecoin [BCX]: a new (and ambitious) game changer in the world of cryptocurrency - Interview with JackofAll
"When it gets too hard for mortals to generate 50BTC, new users could get some coins to play with right away." - Satoshi Nakamoto, 2010

Time goes by fast when one talks about IT, but when talking about cryptocurrencies, time goes faster still. How long has it been since we first heard of the mythic characters from the beginning of the cypherpunk era, like John Gilmore, Eric Hughes, and Tim May, and those from the early days of digital currency, Wei Dai, Satoshi Nakamoto (real or not, it doesn't matter), and others, passing through the current Bitcoin and cryptocurrency whales, now experts or well-established entrepreneurs, down to us, the Fourth Generation? Twenty-two years. However, if we consider that the real deal only began in 2009 with Satoshi's Genesis Block, then the perspective on the quantum leaps we are making becomes clearer.

2009 - Satoshi Nakamoto deployed the tools and strategy for people to regain control.
2011 - FPGA Bitcoin miners appear. Regular GPU miners start to struggle more and more for bitcoins.
2013 - ASIC miners enter the market, and an official Bitcoin mining industry arises, taking small miners, regular people, completely out of the Bitcoin mines and forcing them to mine, and to create, many different altcoins with many different purposes.
2014 - Now, five years after Bitcoin's golden dawn and almost four years after the white ninja Satoshi's disappearance, another cryptowarrior arises on the battlefield to give (hash) power back to the people. Was Satoshi foreseeing what was about to come? This time, however, not so philanthropically...

Meet JackofAll, Head Developer of Battlecoin [BCX].

Andre Torres: Jack, are you there?
JackofAll: Yes.
AT: Thanks for talking to Cryptonerd.co and also to Criptonauta.net (in Portuguese and Spanish). Shall we begin?
JoA: Yes.
AT: With ASIC TH/s mining hardware invading Bitcoin mining pools as FPGAs once did - and are about to do again, this time in the Litecoin mines - common people are having more and more trouble mining the big cryptocurrencies and getting ROI. The cryptomines, crowded with dead and injured miners, have become, literally, a battlefield. So here is my question: why the name Battlecoin?

JoA: Battlecoin is by design, of course, just like everything we do. And yes, it is for the reasons you might think. It has become a "battle" out there to get coins made, to get coins listed, and just to be able to mine against the early guys who have all the hardware. So I came up with the name Battlecoin to solidify what we are doing. We are battling for hashpower, and BCX will give anyone (with enough Battlecoins) the opportunity to control an amount of hashpower comparable to what the elite crypto whales have. We also have a little controversy surrounding us, as some people would say we are waging war on altcoins by the nature of what we are doing. People will battle it out to keep control of our hashpower. It could possibly lock up some of the weaker coins; for instance, if we are paid to switch to a coin with low difficulty, or one with low network support on average, and we are paid to drive the difficulty up. There are several things that could happen to some coins... bad things; they might not react how their developers intended. We might fork coins, or even lock coins up when our pool stops mining. So not everyone will like us, and here again is another battle: trying to walk a public-relations line. In short, I would say that Battlecoin represents all of the battles we have gone through and the many more we have in front of us. We also have other app ideas that would complement the Battlecoin brand.

AT: Nice :) Do you watch anime or read manga - Naruto, more specifically?
I mean, the Battlecoin project is like an army that will, more than just fight, direct the course of the wars by influencing or disrupting the market at its own will or by contract?

JoA: Actually I do not, as my schedule does not permit such luxury... but my sister does, and as a matter of fact she is a very good cartoon artist, and most of her subjects are anime. Any relation to a specific work or character is purely coincidental. I will check it out now though =)

AT: I made the connection to Naruto because there is an organization in it called Akatsuki, and when you replied to my first question it immediately reminded me of them.

JoA: Nice.

AT: However, you did not reply to my previous comment... but then, did I get the idea right, or not?

JoA: Ask the question again and I will try to sum it up... (Jack recalls the question) "The Battlecoin project is like an army that will, more than just fight, direct the course of the wars by influencing or disrupting the market at its own will or by contract?" Yes, exactly. It will do all of the above, and it will have a big influence on the market.

AT: I was just thinking about that.

JoA: Yes, that is why there is such controversy. There are people who don't want to see this happen, but I feel it is part of altcoin evolution.

AT: Indeed. It's like a powerful new ninja/warrior coming into the game, not seen since Satoshi's era.

JoA: Yes... you get it.

AT: What about competitors? Is there anyone on the same level?

JoA: No, not really. We are the only ones I know of who are taking the multipool to this level. Others have the hashpower, but they do not let the public decide where to put it; they just follow an algorithm that points at the "most profitable" coin. We will give you that, plus add some human power to the equation. As far as I know, we are the only ones working on a voting system that controls it, giving it the human element that other concepts lack.

AT: But one swallow does not a summer make...
You do have some other strong companions, don't you? Also, you say "we". Are there other generals in the Battlecoin army? Who are they?

JoA: Well, it was originally my concept, but I could not embark on this project alone. I have one partner who is above board: Mr. Big. We kind of met through a mutual acquaintance and formed a solid partnership. Mr. Big has several projects that I am not sure I am at liberty to discuss, but I do know one of them will be to provide hosting services, and as I said before, we are working on some application ideas that are still at the concept stage. I also have a private backer who would like to remain nameless. In addition, I have a few consultants I work with, and I consider them part of the team, of course. Our team is growing daily... and you have to remember this is a project that involves the community, so in my eyes they are part of us too.

AT: Yes... or all the strategy developed might fall through, since the project will require a LOT of hashpower...

JoA: I am hoping camaraderie develops and rivalries form over this concept. I want people talking in war rooms about which coin they want to hit... strategy for pump-and-dump coins, etc. Yes, it will require a lot of hashpower, and I hope people will want to give us that hashpower, because they will get paid top $$ for it. We won't be keeping the revenue from the Battlecoins that get spent; that money will be split among the miners in our pool as a subsidy, to make sure they continue to make as much as or more than they could mining anywhere else.

AT: Now that the strategy has been covered and we are entering the battlefield grounds, when will the battles begin?

JoA: I cannot confirm a release date for Phase 3, which includes the "arena", but Phase 1 will be open to the public this Friday, 9 minutes after 9 pm.
The wallets will be linked on our website first, then we will post on BCT, and then we will have a big giveaway starting shortly after to kick it all off. I will also provide a mirror on Google Drive. We should have a block crawler and a faucet too, if all goes well.

AT: That sounds great. So, during Phase 1, you will gather the ranks that will battle when Phase 3 starts... What does Phase 2 consist of?

JoA: Phase 2 is where we determine the market value of a Battlecoin. It will need to be listed on an exchange to determine the coin's fair market value. We were originally going to dictate the price on our own exchange, but we feel that, to keep with the nature of crypto, it is best to let the free market decide. We have been in touch with a couple of exchanges that are interested in our idea, as ours is one that would form a close relationship and provide an elevated amount of trade volume for the exchange that carries our brand.

AT: Does Battlecoin already have an exchange of preference? Or perhaps some exchange has already shown interest in trading BCX exclusively?

JoA: I must decline to answer, as negotiations are still going on.

AT: A wise decision. (laughs) Now, from the battlefield to the weapons of combat... could you talk a bit about the mechanics of Battlecoin?

JoA: It is pretty straightforward. We did a small pre-mine to make sure we had enough coin for the third phase, and we are doing a small bonus-block mine at the beginning to give all of our supporters plenty of Battlecoin to play with for the Phase 3 open. From there it is a solid 50 coins a block, with a block found every 2 minutes on average. Difficulty adjusts every block with a 10-block look-back. I think this will make for a very smooth-operating coin, providing plenty of coin to the market for the use of our services. There will be a proof-of-stake reward of 1% every 10 days, with a maturity of 20 days.
This is to reward users for holding our coin, so they will have plenty to use when the time comes.

AT: As we finish this interview, are there any other comments you would like to add?

JoA: I think we have covered quite a bit, and we have much more to come in the future. I appreciate all of your time and hard work!

AT: Me too. I am very glad we had this talk, and for having the opportunity to speak beforehand with the mastermind of a project that can be a huge game changer in the cryptocurrency world.

JoA: Yes, it is nice to be able to talk directly to the people who make it happen. I wish I could have, back in the day... lol. The advantages and tools the newcomers have...

AT: Let us make new days :)

This interview was conducted on 01-07-13, on the #CryptoNerd mIRC channel. Portuguese and Spanish versions are available on criptonauta.net. BCX refers to BattleCoinEXchange. It is not related in any form to BitcoinEXpress.
FPGA Bitcoin Mining. At the foundation of block creation and mining is the calculation of a digital signature. Different cryptocurrencies use different approaches to generate this signature; for the most popular cryptocurrency, Bitcoin, the signature is calculated using a cryptographic hashing function.

A development board can probably be built with many, many pipelines, as there is much more room in a larger, faster FPGA. The big cluster machines have lots of small FPGA chips that can each hold only a limited-size design, meaning less work per FPGA chip.

This project hopes to promote the free and open development of FPGA-based mining solutions and secure the future of the Bitcoin project as a whole. A binary release is currently available for the Terasic DE2-115 development board, and there are compilable projects for numerous boards.

Bitcoin mining is measured in megahashes per second (Mhash/s). In order to make $10 at today's rate of ~$120/bitcoin, you would need to run your Spartan-6 development board for about eight months, and that doesn't include the cost of electricity. Check a good Bitcoin mining calculator if you want the results at today's rates.

Powerful FPGA mining: our CVP-13 makes FPGA cryptocurrency mining easy! With a single board, you can get hash rates multiple times faster than GPUs, with no more complex rigs and heavy maintenance. Up to three CVP-13s can run under a single 1,600 W supply, liquid cooling loop, and motherboard. Easy to use.
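The "cryptographic hashing function" mentioned above is double SHA-256 over the 80-byte block header, which is exactly the function an FPGA miner implements in logic. A minimal sketch using Python's standard hashlib, checked against the well-known genesis block constants:

```python
import hashlib

def block_hash(header_80_bytes: bytes) -> str:
    """Bitcoin's block 'signature': double SHA-256 over the 80-byte header,
    shown in the usual reversed (big-endian display) hex form."""
    digest = hashlib.sha256(hashlib.sha256(header_80_bytes).digest()).digest()
    return digest[::-1].hex()

# Genesis block header fields (well-known public constants), little-endian.
genesis = bytes.fromhex(
    "01000000"    # version 1
    + "00" * 32   # previous block hash (all zeros)
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"  # merkle root
    + "29ab5f49"  # timestamp (2009-01-03)
    + "ffff001d"  # difficulty bits
    + "1dac2b7c"  # nonce
)
print(block_hash(genesis))
# -> 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
```

A mining pipeline does nothing more than evaluate this function for billions of candidate nonces per second; hash-rate units like Mhash/s count evaluations of exactly this double hash.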
Related videos:
- The Largest Mining Farm in Indonesia
- Maximator FPGA development board - unboxing and walkthrough
- Design and Implementation of a Bitcoin Miner Using FPGAs
- Open-Source Tools for FPGA Development - Marek Vašut, DENX Software Engineering
- T4D #84 - Pt 2: Bitcoin Mining, BFL ASIC vs FPGA vs GPU vs CPU (mjlorton)
- FPGA graphics accelerator with 180 MHz STM32F429 controller
- Ultra96: an Arm-based Xilinx Zynq UltraScale+ MPSoC development board built to the Linaro 96Boards Consumer Edition specification
- Ben Heck's FPGA Dev Board Tutorial (element14 presents)
- BitCoin Mining FPGA Card (CarlsTechShed)