rayon Posted May 25

36 minutes ago, austinpop said:
Very interesting. Could you, perhaps in the performance thread, share some data that shows how this setup of one Optane 4800X drive speeds up PGGB processing time compared to multiple M.2 NVMe drives? Did you collect any perfmon data on disk utilization and throughput?

I can check at some point. The performance difference isn't huge in either direction; it may even be slower than those NVMe drives. To me the main point is that this one $200 drive gives me performance that is roughly in the same ballpark, and it has essentially unlimited write endurance. For tracks under 13 minutes, this single drive solves the same problem at a much lower price, and it's a permanent solution.

When writing the outputs, my CPU utilization stayed nicely at the same 80% as with the Samsungs, so the drive did not become the bottleneck. In the output copy phase the drive was often at 100% when doing mixed reads and writes. With the Samsung drives, per-drive utilization was much lower because more drives shared the load, but I still didn't see a night-and-day difference in processing speed or CPU utilization. I will post some benchmarks in the near future.
seeteeyou Posted May 27

https://audiophilestyle.com/forums/topic/62699-a-toast-to-pggb-a-heady-brew-of-math-and-magic/page/91/#comment-1281213

2 hours ago, jeti said:
Just curious, which Windows OS gives the best performance for PGGB, everything else equal?

The RAM performance / disk caching and buffering have seemed somewhat "lacking", to say the least, ever since the introduction of Windows Server 2019 / Windows 10 version 1809:

https://community.spiceworks.com/t/server-2019-network-performance/724968/647
Quote
Alexander Fuchs: The "useless caching" in RAM is intentional, has existed since Windows Vista and is called "Ready Boost". Today it is hardly noticed anymore because it silently loads many OS and app components into RAM on suspicion. Edge, for example, has been loading a large part of itself into RAM at boot for months because many HTML components are constantly used for rendering various windows and apps. This is a kind of "prefetch", which improves performance for the vast majority of Windows users. Linux, as far as I know, also uses almost the entire RAM as cache, unless it is needed for something else. Of course, you as a user are free to adapt this behavior to your needs, as are your customers. In individual cases the default settings are certainly not optimal, which is why we have experts like you in the Windows "ecosystem" who understand the interrelationships and can apply them for their customers. Please understand that we cannot possibly make an OS that suits all users and scenarios "out of the box". We rely on you for that.

https://community.spiceworks.com/t/server-2019-network-performance/724968/648
Quote
According to Passmark, my FSB 2 (W10-1703), equipped with 16 GB DDR3 RAM that runs at 1066 MHz in dual channel, has the following RAM performance. And here, for comparison, the performance values of a high-end server equipped with two Xeon Gold 6254 and hexa-channel DDR4 under Windows Server 2019. I'm still missing the right words at the moment, so …

https://community.spiceworks.com/t/windows-server-2019-strange-file-copy/764279/16
Quote
Agree. That is a known issue with Windows Server 2019. There is a workaround that we are practicing to copy large files at a solid, stable speed. The trick is using xcopy or robocopy with the /J switch instead of Windows Explorer, to bypass disk caching and buffering.

https://community.spiceworks.com/t/windows-server-2019-strange-file-copy/764279/17
Quote
Yes, using unbuffered transfers works; it bypasses the FCM (File Cache Manager), which apparently is the issue. Using xcopy /J changed the transfer from 2 minutes to 18 seconds on a 10 GB file. But most people use copy/paste from the GUI, and loading a roaming profile isn't capable of using unbuffered transfers (as far as I'm aware). So xcopy /J is a great workaround, but not a solution.

Earlier versions of Windows Server should be more lightweight to begin with:
https://www.mediafire.com/folder/28atb0rkfw3po/LiteOS

Quite a few audiophiles also went for a particular version that sounded better on their systems, so it might have something to do with better latency:
http://jplay.eu/forum/index.php?/topic/5608-best-ever-windows-version-for-audiophile/

Even Windows Server 2012 is compatible with .NET 9.0:
https://github.com/dotnet/core/blob/main/release-notes/9.0/supported-os.md#windows

Perhaps it's still good for PGGB•IT!
https://audiowise-canada.myshopify.com/products/pggb-it
Quote
Windows 10/11 64-bit or any Windows Server system.
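Going back to the unbuffered-copy workaround quoted above: for anyone who wants to script it rather than run it by hand, here is a minimal sketch that wraps robocopy with its /J (unbuffered I/O) switch. The paths and file pattern are placeholders, not anything from this thread.

```python
# Minimal sketch of the unbuffered-copy workaround quoted above: wrap robocopy
# with its /J switch (unbuffered I/O, useful for very large files). The paths
# below are placeholders -- point them at your own source and destination.
import subprocess

SRC_DIR = r"D:\PGGB Output\Album"     # hypothetical source folder
DST_DIR = r"\\nas\music\Album"        # hypothetical destination folder
FILE_PATTERN = "*.flac"               # copy everything matching this pattern

result = subprocess.run(
    ["robocopy", SRC_DIR, DST_DIR, FILE_PATTERN, "/J"],
    capture_output=True,
    text=True,
)
# robocopy exit codes 0-7 indicate success or partial success; 8+ are failures.
print(result.stdout)
if result.returncode >= 8:
    raise SystemExit(f"robocopy failed with exit code {result.returncode}")
```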
jeti Posted May 27

It seems that CPUs with more cores tend to have lower clock frequencies, like a base frequency of 2.5 GHz or so. Should one opt for higher frequency over number of cores?
jeti Posted May 27

Sorry, I meant for building a PC solely for the purpose of running PGGB.
Zaphod Beeblebrox Posted May 27

1 hour ago, jeti said:
It seems that CPUs with more cores tend to have lower clock frequencies, like a base frequency of 2.5 GHz or so. Should one opt for higher frequency over number of cores?

There is a balance. The newest-generation desktop CPUs with 16 cores offer better cost-to-performance, as they can run at a higher frequency and also be slightly overclocked. @austinpop provided some tips on doing this earlier in the thread.

1 hour ago, jeti said:
Also PCIe 3.0 vs PCIe 4.0?

For the purpose of paging, NVMe drive read and write speeds are important too, so Gen 4 will be better, and Gen 5 as well.
Zaphod Beeblebrox Posted May 27

6 hours ago, Schafheide said:
These figures might be informative for some folk.
Task: Convert a recording of Mahler Symphony #2 (82 min 27 sec, including 13 min 28 sec of extras), 24-96 PCM, into DSD1024 + 9th order mod + EQ.
Result: approx 13 1/2 hours.

How long was the longest track?
rayon Posted May 27

1 hour ago, jeti said:
Also PCIe 3.0 vs PCIe 4.0?

My take on this is that it doesn't matter. It looks like the drive itself becomes the bottleneck before the bus does. We seem to be dealing with 64 KB random reads but largely sequential writes, and even when writing, Gen 3 doesn't become the bottleneck. I'd guess the P5800X is the only drive that might be able to saturate Gen 3 with 64 KB random reads. If you do DSD1024, look for fast 64 KB random reads, fast sustained writes, and high write endurance.
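If you want a quick sanity check of how a candidate paging drive handles that access pattern, here is a minimal Python sketch comparing sequential vs. 64 KB random reads. It is not PGGB's actual I/O path (it issues one synchronous read at a time, and the Windows file cache can inflate results unless the test file is much larger than free RAM); the path and sizes are placeholders, and a dedicated tool such as CrystalDiskMark, fio, or DiskSpd will give more trustworthy numbers.

```python
# Rough sketch (not PGGB's actual I/O path): compare sequential vs. 64 KiB
# random reads on a scratch file, stdlib only. File size and path are
# assumptions -- make the file larger than free RAM to limit OS caching.
import os, random, time

PATH = r"D:\scratch\testfile.bin"   # hypothetical path on the drive under test
FILE_SIZE = 8 * 1024**3             # 8 GiB; increase on machines with lots of RAM
BLOCK = 64 * 1024                   # 64 KiB, the block size assumed in this thread
READS = 20_000                      # number of reads per pass

def make_file():
    # Write the scratch file once, in 64 KiB chunks.
    with open(PATH, "wb") as f:
        chunk = os.urandom(BLOCK)
        for _ in range(FILE_SIZE // BLOCK):
            f.write(chunk)

def bench(random_access: bool) -> float:
    offsets = list(range(FILE_SIZE // BLOCK))
    if random_access:
        random.shuffle(offsets)
    offsets = offsets[:READS]
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:   # unbuffered on the Python side only
        for i in offsets:
            f.seek(i * BLOCK)
            f.read(BLOCK)
    elapsed = time.perf_counter() - start
    return READS * BLOCK / elapsed / 1e6       # MB/s

if __name__ == "__main__":
    if not os.path.exists(PATH):
        make_file()
    print(f"sequential 64K reads: {bench(False):8.1f} MB/s")
    print(f"random     64K reads: {bench(True):8.1f} MB/s")
```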
Schafheide Posted May 27

1 hour ago, Zaphod Beeblebrox said:
How long was the longest track?

The longest was 21:16, the next longest was 19:16.
rayon Posted May 27

@austinpop I won't start doing deeper analysis right now, but wanted to quickly give you something anecdotal:

Samsung x 3:
- 5m 33s redbook: 51min 57s total time
- 6m 18s redbook: 58min 24s total time
- 7min 15.5s redbook: 1h 9min 16s total time
- 7min 51s redbook: 1h 15min 23s total time

Intel P4800X solo:
- 5min 25s redbook: 37min 13.76s total time
- 6m 9s redbook: 41min 5s total time
- 6min 47s redbook: 54min 0s total time
- 7min 43s redbook: 2h 28min 25s total time

I have to investigate further whether that 7min 43s redbook on the P4800X is just an outlier (due to a nightly Windows update or something else weird). Otherwise it seems that the single P4800X is indeed slightly faster than three Samsung drives. In perfmon they look visually pretty much identical.
seeteeyou Posted May 27

http://www.lmdb.tech/bench/optanessd/imdt.html
Quote
Using the IMDT with Optane SSDs can significantly boost performance with larger DBs, making a system with limited RAM perform as if it had much more physical RAM. There are limits though, and it appears that exceeding about a 4:1 ratio of Optane:RAM will nullify the performance benefits. (Note that this 4:1 ratio is based on uncertain results from the 1200GB test, so take it with a grain of salt. Your mileage will certainly vary as far as the ratio of Optane:RAM.)

Unfortunately IMDT might not be "ideal", since the license alone would cost more than 100 bucks:
https://www.provantage.com/intel-memdrvopt085gb~7ITEN0XM.htm
https://www.tech-america.com/item/intel-memory-drive-technology-5-years-standard-support/memdrvopt085gb

And then it's only good for Linux, not to mention the fact that it already reached EoL:
https://williamlam.com/2020/12/intel-nuc-with-512gb-memory.html
https://www.intel.com/content/www/us/en/support/articles/000059551/memory-and-storage/ssd-management-tools.html
Quote
Intel® Memory Drive Technology (IMDT) reached End of Life on June 30, 2021. This includes software bundled hardware SKUs and the standalone Intel® Memory Drive Technology software license.

Top benefits to using Intel Optane NVMe for cache drives in VxRail
https://infohub.delltechnologies.com/en-us/p/top-benefits-to-using-intel-optane-nvme-for-cache-drives-in-vxrail/

OTOH, there's something unique about 3D XPoint (e.g. Optane DC P4800X) versus NAND flash:
https://news.ycombinator.com/item?id=17003713
Quote
Sequential read/write is usually optimized for burst workloads on consumer SSDs. The Intel 900p can sustain its peak read/write performance for hours.

https://wccftech.com/intel-3d-xpoint-optane-ssd-dc-p4800x-performance/
Quote
Whereas traditional SSDs hit peak performance during the initial moments after a load is applied, they quickly settle down to a performance level that is many tiers below the original speed. The Intel Optane memory appears to be completely immune to this effect and has no trouble delivering the rated speed consistently throughout the testing done by Tom's Hardware. In fact, this might be the first SSD where you actually get the performance that is advertised 24/7. Looking at the Random Read, Write and Mixed benchmarks as well, we see the same trend: the Optane-based DC P4800X is in another league when compared to its NAND based siblings and can safely be called a disruptive innovation in this area.
austinpop Posted May 27

12 hours ago, rayon said:
@austinpop I won't start doing deeper analysis right now, but wanted to quickly give you something anecdotal:
Samsung x 3:
- 5m 33s redbook: 51min 57s total time
- 6m 18s redbook: 58min 24s total time
- 7min 15.5s redbook: 1h 9min 16s total time
- 7min 51s redbook: 1h 15min 23s total time
Intel P4800X solo:
- 5min 25s redbook: 37min 13.76s total time
- 6m 9s redbook: 41min 5s total time
- 6min 47s redbook: 54min 0s total time
- 7min 43s redbook: 2h 28min 25s total time
I have to investigate further whether that 7min 43s redbook on the P4800X is just an outlier (due to a nightly Windows update or something else weird). Otherwise it seems that the single P4800X is indeed slightly faster than three Samsung drives. In perfmon they look visually pretty much identical.

This was an excellent experiment, and the data looks very promising. Despite being only PCIe Gen 3, the P4800X Optane drive has the advantages of very low latency and very high IOPS with 4K random reads, even at a queue depth (QD) of 1. You made the very reasonable assumption that Windows paging would benefit from these hardware characteristics, and it looks like you're right.

One important thing this data tells us is that the paging I/O is an integral part of the PGGB workload. So, as you've found, speeding up the disk throughput leads to reduced (i.e. improved) PGGB completion times.

As regards this data point:

Quote
7min 43s redbook: 2h 28min 25s

This may just be a one-off, so it bears review. Even if it is an anomaly, the fact remains that at some track length the I/O load will saturate the disk (drive it to 100% busy), at which point, of course, you will see a nonlinear increase in PGGB completion time.

I should mention that another ASer, who is also a beta tester, has done similar experiments, except that in his case, rather than use an Optane drive, he tried to configure multiple PCIe Gen 5 NVMe Crucial T705 drives. This turns out to be challenging on his ASUS Z790 motherboard. He'll be posting his results soon. However, he was able to run successfully with a single T705 drive and, like you, achieved a reduction in PGGB processing time due to the faster disk throughput.

The key question going forward, with either of these approaches, is: how can we scale this? One of the challenges several people have now experienced is that using heterogeneous paging drives tends to run PGGB at the rate supported by the slowest drive. So, getting low PGGB completion times AND sustaining them for longer tracks requires two or more identical fast drives. @rayon are you planning to try deploying a second P4800X?
rayon Posted May 27

1 hour ago, austinpop said:
The key question going forward, with either of these approaches, is: how can we scale this? One of the challenges several people have now experienced is that using heterogeneous paging drives tends to run PGGB at the rate supported by the slowest drive. So, getting low PGGB completion times AND sustaining them for longer tracks requires two or more identical fast drives. @rayon are you planning to try deploying a second P4800X?

I've considered a second P4800X, as they're not that expensive and I could also sell one Samsung. However, I'll try with one for some time now to see how often I actually hit 1h+ processing times.

Btw, I'm quite sure Windows paging uses 4 KB I/Os at QD 16, i.e. effectively 64 KB blocks internally. I found this figure somewhere, and the performance numbers I've seen seem to match that assumption. Writing is really fast, but reading those 64 KB randoms is slow compared to sequential. The bus speed never seems to be the limiting factor, as the drive's I/O saturates much earlier at these low queue depths. And I'm guessing the P4800X is faster than the Samsungs thanks to its lower latency, since total 64 KB random-read IOPS is higher with the three Samsung drives. Or it may simply be lighter for the OS/CPU to deal with only one drive.
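If anyone wants to approximate that assumed paging pattern with a synthetic test, here is a rough sketch that shells out to Microsoft's DiskSpd: 64 KB blocks, random reads, 16 outstanding I/Os, OS caching disabled. The flag meanings are as I remember them from the DiskSpd documentation, so verify them with `diskspd -?` before trusting the numbers; the target path is a placeholder, and diskspd.exe is assumed to be on PATH.

```python
# Rough sketch: drive DiskSpd from Python to approximate the access pattern
# guessed at above (64 KiB random reads at QD 16). Verify the flags against
# `diskspd -?` before relying on the results. Target path is a placeholder,
# and diskspd.exe must be on PATH.
import subprocess

TARGET = r"E:\scratch\pagetest.dat"   # hypothetical file on the paging drive

cmd = [
    "diskspd",
    "-c32G",      # create a 32 GiB test file if it does not already exist
    "-b64K",      # 64 KiB block size
    "-r",         # random I/O
    "-o16",       # 16 outstanding I/Os (queue depth)
    "-t1",        # 1 worker thread
    "-w0",        # 0% writes, i.e. pure reads
    "-Sh",        # disable OS software caching and hardware write caching
    "-d30",       # run for 30 seconds
    "-L",         # collect latency statistics
    TARGET,
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout or result.stderr)
```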
austinpop Posted May 28

On 5/26/2024 at 5:32 PM, Schafheide said:
These figures might be informative for some folk.
Task: Convert a recording of Mahler Symphony #2 (82 min 27 sec, including 13 min 28 sec of extras), 24-96 PCM, into DSD1024 + 9th order mod + EQ.
Result: approx 13 1/2 hours.

That's not bad on a Mac, as you're going to DSD1024 with EQ. As it happens, I just did a Mahler 2nd from 24/96 to DSD512x1, 9th order: 6h 38m on Windows 10, 14900K, 192GB.
rayon Posted May 28

Some more redbooks with the P4800X solo:

- 6min 29s: 2h 48min 17s
- 10min 56s: 1h 11min 13s
- 10min 58s: 1h 11min 37s
- 6min 24s: 51min 29s
- 9min 13s: 1h 6min 39s
- 8min 3s: 1h 2min 27s
- 7min 31s: 59min 15s
- 7min 53s: 59min 59s
- 8min 1s: 1h 2min 36s

What surprised me: processing time across the 7-11 min tracks didn't grow linearly, but much less than that. My educated guess is that with longer tracks the output blocks are bigger and thus faster for virtual memory to read. It may be that, since the number of blocks is static, the CPU spends a roughly equal amount of time waiting for them. This theory would be consistent with the fact that when using "Reduce contention", which reduces the number of blocks, things process much faster. So it may be that the way to make this scale further is to use really big blocks: with 1024fs for redbook it currently uses 256 blocks, and it could be that things speed up even more if 128 or even 64 blocks were used.

That first track is clearly an outlier. My guess is that I either didn't run the process as administrator, or I had some other processes open and Windows pushed things into the background after a while (these are listed in processing order).
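A rough back-of-envelope of why those blocks grow with track length. The sample format and channel count here are my assumptions, not documented PGGB internals; only the 44.1 kHz rate, the 1024x factor, and the 256-block figure come from the discussion above.

```python
# Back-of-envelope only -- 64-bit float stereo intermediates are an assumption,
# not documented PGGB internals. It just illustrates why, with a fixed block
# count, per-block size (and so the size of each paging read) grows with
# track length.
RATE_1FS = 44_100            # redbook sample rate, Hz
UPSAMPLE = 1024              # 1024fs target discussed above
CHANNELS = 2                 # stereo (assumption)
BYTES_PER_SAMPLE = 8         # 64-bit float intermediate (assumption)
BLOCKS = 256                 # block count mentioned above for 1024fs redbook

def per_block_mib(track_seconds: float) -> float:
    total_bytes = (track_seconds * RATE_1FS * UPSAMPLE
                   * CHANNELS * BYTES_PER_SAMPLE)
    return total_bytes / BLOCKS / 2**20

for minutes in (5, 7, 11, 21):
    print(f"{minutes:3d} min track -> ~{per_block_mib(minutes * 60):,.0f} MiB per block")
```

Under those assumptions a 5-minute track gives roughly 0.8 GiB per block while an 11-minute track gives roughly 1.8 GiB, so longer tracks mean each paging read is larger and closer to sequential, which fits the better-than-linear scaling seen above.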
seeteeyou Posted May 28

https://www.remastero.com/pggb.html
Quote
On Windows, if you see that your CPU utilization is not above 70%, you will have to run PGGB as an administrator so Windows provides PGGB with a higher priority.
Quote
Provides a command line interface (available separately on request)

That means we could initiate an instance of PGGB remotely from another computer, with whatever priority and local account we want, while the PGGB machine itself remains logged off (i.e. with very little or nothing unnecessary running)?

https://www.nirsoft.net/utils/advancedrun-x64.zip
https://www.nirsoft.net/utils/advanced_run.html
Quote
Run a program in high priority.
Run a program on remote computer by using a temporary service (Requires full admin access on the remote machine)

Scheduling Priorities
https://learn.microsoft.com/en-us/windows/win32/procthread/scheduling-priorities

Thread Priorities in Windows
https://scorpiosoftware.net/2023/07/14/thread-priorities-in-windows/

Thread priorities in Process Explorer

I/O Prioritization in Windows OS
https://clightning.medium.com/i-o-prioritization-in-windows-os-6a0637874a52
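For a local variant of the same idea, a process can be launched with an elevated priority class directly from the Python standard library, shown in the sketch below. The executable path and arguments are placeholders: PGGB's actual command-line syntax is only available on request from the author, so substitute whatever command you have been given.

```python
# Minimal sketch: start a program with HIGH priority on Windows using only the
# standard library (subprocess.HIGH_PRIORITY_CLASS is Windows-only, Python 3.7+).
# The path and arguments are placeholders -- PGGB's actual CLI is available
# separately on request, so substitute the real command.
import subprocess

cmd = [r"C:\PGGB\pggb-cli.exe", "--your-args-here"]   # hypothetical

proc = subprocess.Popen(
    cmd,
    creationflags=subprocess.HIGH_PRIORITY_CLASS,     # run at high priority
)
proc.wait()
print("exit code:", proc.returncode)
```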
seeteeyou Posted May 29

On 4/28/2024 at 1:06 AM, rayon said:
This means that PGGB does fewer roundtrips when fetching those outputs before modulation, and that has been my bottleneck. Could you please try a similar-length track with redbook? That triples the time for me. However! I have an update. I spent considerable time today fiddling with my BIOS. The single biggest improvement (by far) came when I disabled hyperthreading, and my performance more than doubled during the latter phase.

(Follow-up questions after about a month.)

Is a single Optane DC P4800X (with MUCH better latency on top of providing consistent performance throughout the entire task) still behaving similarly? In other words, is it still better to go for that rather counter-intuitive BIOS option (i.e. disabling hyperthreading) in order to keep the same level of performance boost?

More importantly, what's the delta between enabling and disabling hyperthreading now that you've got access to 3D XPoint instead of NAND flash for caching purposes?
austinpop Posted May 29

@rayon Did not realize that you might still be running non-hyperthreaded. Looking forward to clarification. I run with HT on and PGGB set to Auto (32) workers.

Meanwhile, at least while your P4800X paging disk is not saturated, do also try running with "Reduce Contention" OFF, i.e. unchecked. On my system with 3 NVMe paging drives, RC OFF is significantly faster.

Example 1: 24/96, 22m 44s duration
- RC ON: 2 hrs 26 mins 50.7102 secs
- RC OFF: 1 hr 52 mins 43.4198 secs

Example 2: 24/96, 9m 48.7s duration
- RC ON: 36 mins 45.6702 secs
- RC OFF: 18 mins 57.9517 secs

For general readers, Reduce Contention ON is still the best setting if you have a single paging file on a single SSD. However, for those who have provisioned more disks to increase I/O bandwidth, RC OFF can provide an additional speed-up; it demands more bandwidth from the disk(s), which the extra disks can now supply.
austinpop Posted June 13

I thought I'd mention a couple of new gotchas to be aware of. Mostly this is for extreme tweakers, but if you notice a drop in performance of your PGGB runs, these are potential culprits.

First, if you've been following the saga of instability in Intel 13th and 14th gen CPUs, Intel has compelled BIOS vendors to release updates with Intel-specified power profiles. This is in principle a good thing, but apparently ASUS has managed to muck it up anyway. Not sure about MSI and ASRock. Anyway, if prompted to update your BIOS, be aware that you may get one of these new profiles, and that it could affect your performance. A popular YouTube channel has a screenshot illustrating ASUS's idiocy.

What I would recommend: before applying a BIOS update, go into your BIOS and record the current values of:
- PL1
- PL2
- CEP (Current Excursion Protection)
- ICCMax

and then see what you get after the update. Any changes could correlate with reduced performance.
austinpop Posted June 13

The second gotcha is something called VBS (Virtualization-Based Security). I recently moved my PGGB 14900K/192GB machine from W10 Enterprise to W11 Pro. As I was setting things up, I found that Intel's XTU (eXtreme Tuning Utility) would not run, complaining that VBS was running and that certain things in the BIOS (undervolt protection) needed to be enabled. That led me to:
https://www.tomshardware.com/how-to/disable-vbs-windows-11

Disabling VBS allowed me to run XTU again, but the claimed performance loss due to VBS was interesting. I have not run tests with and without VBS, but if you are running Windows 11, do check whether you have VBS enabled. If you do, turn it off and see how much of a speedup, if any, you get with PGGB.
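A quick way to check is System Information (msinfo32), which lists "Virtualization-based security" in the System Summary. If you prefer to script the check, here is a small sketch that reads the DeviceGuard policy value from the registry; VBS can also be switched on by other features, so treat msinfo32 as the authority on what is actually running.

```python
# Sketch: read the VBS policy value from the registry (Windows only). The
# DeviceGuard key below is the documented location for the policy switch, but
# VBS can be enabled by other features too -- if the value is missing or 0,
# confirm in msinfo32 ("Virtualization-based security") before concluding it
# is off.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\DeviceGuard"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as k:
        value, _ = winreg.QueryValueEx(k, "EnableVirtualizationBasedSecurity")
        print("EnableVirtualizationBasedSecurity =", value)
except FileNotFoundError:
    print("Value not set -- check msinfo32 for the effective VBS state.")
```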
seeteeyou Posted June 13

Windows 11 24H2 will enable BitLocker encryption for everyone — happens on both clean installs and reinstalls
https://www.tomshardware.com/software/windows/windows-11-24h2-will-enable-bitlocker-encryption-for-everyone-happens-on-both-clean-installs-and-reinstalls

Quote
Regardless, any Windows 11 version that has BitLocker functionality will now automatically have that activated/reactivated during reinstallations starting with 24H2. This behavior applies to clean installs of Windows 11 24H2 and system upgrades to version 24H2. Systems that upgrade to Windows 11 24H2 automatically have the Device Encryption flag turned on, but it only takes effect (for some reason) once Windows 11 24H2 is reinstalled on the machine. Not only is the C: drive encrypted, but all other drives connected to the machine will be encrypted as well during reinstallation.

That would be "lots of fun" if you ask me; just imagine the power of Murphy's Law when you end up unable to find that BitLocker recovery key anywhere down the road.

OTOH, let's check this out:
https://forums.mydigitallife.net/threads/discussion-windows-11-enterprise-iot-enterprise-n-ltsc-2024-24h2-26100-x.88280/page-19#post-1837912

26100.1.240331-1435.ge_release_CLIENT_ENTERPRISES_OEM_x64FRE_en-us.iso
https://files.rg-adguard.net/file/f183276a-4696-ee94-ad61-1a0de1d96c80

The ISO image linked above contains all 3 options; Automatic Device Encryption is disabled by default with either #2 or #3.
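If you want to verify whether a machine has quietly picked up device encryption, the built-in `manage-bde -status` command reports the conversion state of every volume. A minimal sketch of wrapping it (it must be run from an elevated prompt):

```python
# Sketch: report BitLocker / device-encryption status for all volumes by
# wrapping the built-in manage-bde tool. Run from an elevated (administrator)
# session, otherwise manage-bde refuses to report.
import subprocess

result = subprocess.run(
    ["manage-bde", "-status"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```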
seeteeyou Posted July 31

13th and 14th Gen Intel CPU instability also hits servers — W680 boards with Core i9 K-series chips are crashing
https://www.tomshardware.com/pc-components/cpus/13th-and-14th-gen-intel-cpu-instability-also-hits-servers

Game publisher claims 100% crash rate with Intel CPUs – Alderon Games says company sells defective 13th and 14th gen chips
https://www.tomshardware.com/pc-components/cpus/game-publisher-claims-100-crash-rate-with-intel-cpus-alderon-games-says-company-sells-defective-13th-and-14th-gen-chips

Dev reports Intel's laptop CPUs are also suffering from crashing issues — several laptops have suffered similar failures in testing
https://www.tomshardware.com/pc-components/cpus/dev-reports-that-intels-laptop-cpus-are-also-crashing-several-laptops-have-suffered-similar-crashes-in-testing

Intel says 13th and 14th Gen mobile CPUs are crashing, but not due to the same bug as desktop chips — chipmaker blames common software and hardware issues
https://www.tomshardware.com/pc-components/cpus/intel-says-13th-and-14th-gen-mobile-cpus-are-crashing-but-not-due-to-the-same-bug-as-desktop-chips-chipmaker-blames-common-software-and-hardware-issues

Intel finally announces a solution for CPU crashing and instability problems — claims elevated voltages are the root cause; patch coming by mid-August [Updated]
https://www.tomshardware.com/pc-components/cpus/intel-finally-announces-a-solution-for-cpu-crashing-errors-claims-elevated-voltages-are-the-root-cause-fix-coming-by-mid-august

Leaked internal reports allegedly reveal Intel's instability problems are not over — elevated voltages could be only one of the causes of CPU crashing
https://www.tomshardware.com/pc-components/cpus/leaked-internal-reports-allegedly-reveal-intels-instability-problems-are-not-over-elevated-voltages-could-be-only-one-of-the-causes-of-cpu-crashing

Intel 13th Gen CPUs allegedly have 4X higher return rate than the prior gen — retailer stats also claim Intel CPU RMAs are higher than AMD
https://www.tomshardware.com/pc-components/cpus/intel-13th-gen-cpus-allegedly-have-4x-higher-return-rate-than-the-prior-gen

Intel's CPU instability and crashing issues also impact mainstream 65W and higher 'non-K' models — damage is irreversible, no planned recall
https://www.tomshardware.com/pc-components/cpus/intel-cpu-instability-crashing-bug-includes-65w-and-higher-skus-intel-says-damage-is-irreversible-no-planned-recall
seeteeyou Posted August 4

Intel to lay off more than 15% of workforce — 15,000 or more employees — encountered Meteor Lake yield issues, suspends dividend
https://www.tomshardware.com/pc-components/cpus/intel-to-layoff-more-than-15-of-workforce-almost-20000-employees-encountered-meteor-lake-yield-issues-suspends-dividend

Intel loses $1.6 billion as data center CPU and foundry divisions struggle
https://www.tomshardware.com/pc-components/cpus/intel-loses-dollar16-billion-as-data-center-cpus-and-foundry-struggles

Intel's stock drops 30% overnight — company sheds $39 billion in market cap
https://www.tomshardware.com/pc-components/cpus/intels-stock-drops-30-overnight-company-sheds-dollar39-billion-in-market-cap

Oh well, at least the E1.S flavor of the 400GB Optane P5801X seems to have gotten much cheaper these days.
austinpop Posted August 11

Since things are kind of quiet, I thought I'd update the group with some experiments I've been doing to further speed up my PGGB runs. Today's focus is on the "Reduce Contention" flag.

On the face of it, it sounds like motherhood and apple pie. Why on earth wouldn't you reduce contention? We all want peace on earth, don't we? 😏 Sure, but in PGGB this flag actually represents a choice between two algorithmic paths: one has the potential to be faster but requires more I/O operations, while the other minimizes I/O (i.e. reduces contention on the paging file) but can run slower if I/O is not the bottleneck.

I should say at this point that if you're running on a Mac, always keep this flag checked. Likewise on Windows, if you have a single pagefile on an SSD, you should not mess with this. This post is targeted at folks who have heftier machines (like my i9-14900K with pagefiles on more than one NVMe SSD) that can absorb very high amounts of I/O.

It also turns out the highest amount of I/O is generated when processing Redbook input files, as they require the highest upsampling ratio: 512x for DSD512 and 1024x for DSD1024. If you look at the pggb_album_analysis_plus_v1.csv file in your output directory, you will see this reflected in the Blocks column (a small script for pulling that column out is sketched at the end of this post).

I tested tracks of roughly 4, 8, 16, 24, and 32 minutes with and without the Reduce Contention flag to see if there was an inflection point. Here is the data.

Key Takeaways

On my system, doing the DSD512 gargle blasting that works best with my DAC, I could not find an inflection point at longer track durations where turning on the "Reduce Contention" flag would have reduced my PGGB run times.

However, I did try a test where I gargle blasted a relatively short track of 6 mins to DSD1024, and here turning on the Reduce Contention flag definitely had a positive impact in lowering the PGGB processing time.

My recommendations

If you're on macOS, or on Windows with a single pagefile, don't mess about. Set the Reduce Contention flag and get on with your life. The same is likely the case if you're doing 1FS to DSD1024 upsampling, as that is a very I/O-intensive case.

If you're on Windows and have 2 or more paging drives, then try running both ways (ON and OFF) for a few test tracks of durations that are typical for you, and see where you land.

Please post the results of any tests that you try!
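Here is the sketch mentioned above for pulling the Blocks column out of the analysis file. The only column I'm relying on is "Blocks"; everything else about the file's layout is discovered at runtime, and the path is a placeholder for your own output directory.

```python
# Sketch: list per-track "Blocks" values from PGGB's analysis CSV. Only the
# "Blocks" column is assumed (it is mentioned above); the rest of the layout
# is discovered at runtime. The path is a placeholder.
import csv
from pathlib import Path

CSV_PATH = Path(r"D:\PGGB Output\pggb_album_analysis_plus_v1.csv")  # adjust

with CSV_PATH.open(newline="", encoding="utf-8-sig") as f:
    reader = csv.DictReader(f)
    print("columns found:", reader.fieldnames)
    first_col = reader.fieldnames[0] if reader.fieldnames else None
    for row in reader:
        # Print the first column (likely the track name) alongside Blocks.
        label = row.get(first_col, "") if first_col else ""
        print(f"{label}: Blocks = {row.get('Blocks', '?')}")
```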