
Unsolved


1 Rookie

 • 

4 Posts


May 7th, 2024 12:49

XPS 8960, getting the best from GeForce RTX 4080?

I have just bought an XPS 8960, primarily for the NVIDIA GeForce RTX 4080 16GB GDDR6X graphics card, to enable me to run Python applications, but I'm a novice! I didn't upgrade my monitor and have connected it via HDMI to one of the four general ports on the back of the PC (not on the graphics card). When Python runs, it uses nearly 100% CPU and is very slow. I have noticed there is also Intel UHD Graphics 770 running. What do I need to do to get the best out of this system, please? Anyone able to help, please treat me like a novice with no knowledge whatsoever, thank you!!

10 Elder

 • 

43.7K Posts

May 7th, 2024 18:10

For starters... which CPU and how much system RAM does your PC have...? 

Any signs the PC is overheating and throttling the speed of the CPU so it can cool off? Which CPU heat sink do you have: 65W air cooling, 125W air, or 125W liquid cooling? Do you need a heat sink + fan upgrade?

Is the Windows power plan set for Best Performance, rather than Best Energy Savings?

Have you opened the NVidia Control Panel, clicked Manage 3D settings and then scrolled down right panel to Power Management and changed it to "prefer max performance" ?

Have you set up the preferences on the Windows Graphics settings screen for Python to use the NVidia GPU?

(edited)

1 Rookie

 • 

4 Posts

May 8th, 2024 05:31

Thank you @RoHe, this is what I know...

13th Gen Intel® Core™ i9-13900K processor (24-Core, 32MB Cache, 3.0 GHz to 5.4 GHz) and 32GB DDR5 (2x16GB) at 4800 MHz

Performance CPU liquid cooling, no signs of overheating!

I've looked at the power plan and can't find "Best Performance"; all I could do was make sure the unit never went to sleep. No "Best Performance" option in the NVidia Control Panel either.

Would I be right in thinking that connecting the screen to an ordinary HDMI port is wrong, and that I should connect it to the card itself with a DisplayPort cable? Does the age of the screen have anything to do with performance?
Thanks for taking an interest




4 Operator

 • 

1.8K Posts

May 8th, 2024 11:04

@ChrisMaddrell

First question I have is: why do you think you are not getting the best out of the card? How have you determined that, and the 100% CPU usage?

Is the Python app a game? Is it actually using the Nvidia card?

If you go to the Windows Settings (Windows key+I) and select SYSTEM, DISPLAY, GRAPHICS, you'll see a list of programs. You should find the Python program in that list. Click on it, then on Options, and make sure it is set to use the Nvidia card.

Also, run GeForce Experience, and it should find the program as well. There IS an Optimize button on it; see if using that helps.

I assume you installed Python correctly, but check this link as well, https://learn.microsoft.com/en-us/windows/python/beginners.

1 Rookie

 • 

4 Posts

May 8th, 2024 12:02

Thank you @ispalten, I’ll answer your questions ASAP. Please could you clarify one thing for me, do I need to connect my screen to the graphics card with a display port cable? Thanks 

4 Operator

 • 

1.8K Posts

May 8th, 2024 14:50

@ChrisMaddrell

Thank you @ispalten, I’ll answer your questions ASAP. Please could you clarify one thing for me, do I need to connect my screen to the graphics card with a display port cable? Thanks 

No, either should work? I think though, if you want sound to go to the monitor, HDMI is needed? I could be wrong on that though.

I have an RTX2060 6GB card and a Dell 32in S3221QS monitor connected via HDMI and the sound is on the monitor for now.

I run MS Flight Simulator and get all the 'speed' I need out of the card, at 3840x2160 resolution, and have no problems driving it.

Check this link --> https://www.tomshardware.com/features/displayport-vs-hdmi-better-for-gaming . That said, I'd be more concerned about the 100% CPU usage. How are you measuring that? Open Task Manager, and on the Performance tab on the left you can select any device to see its usage.

2 Intern

 • 

231 Posts

May 8th, 2024 15:25

@ChrisMaddrell That is because, by default, Python will use the CPU only when computing, which is CPU intensive.

In the Python installer:

1. In the Optional Features window, select Documentation, pip, tcl/tk and IDLE, and Python test suite. Click Next to install these features.

Secondly, you need to download the Anaconda Distribution (Distribution | Anaconda).

  1. In the Advanced Installation Options, you can choose how you want to install Anaconda. You can tick everything except the second box. You can also leave the options open and add them later on when you need them. Click Install.

  You will also need the CUDA Toolkit

  1. Go to the CUDA Toolkit page on the Nvidia website. Click Download now.
  2. Select your Operating System, Architecture (only x86_64 is compatible), Windows Version, and Installer Type. Then click Download.
  3. Choose your install path.
  4. Agree to the Terms of Service. Then, you will need to choose your Installation options.
  5. Select Express Installation and click NEXT.
  6. After the installation has finished, you can click on CLOSE.

You can install Numba using a command in Conda. In the full Anaconda distribution, the Numba, NumPy, and llvmlite packages are already installed by default, but in Conda you must install them through the command prompt.

  1. Using the pip install numba command in Conda will install Numba and add the relevant packages to the Conda directory.
  2. Using the nvcc --version command, you can verify the CUDA Toolkit installation. Using the pip show numba command, you can verify the Numba installation (a quick check from inside Python is also sketched below).
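
As a quick sanity check (just a sketch, not part of the installer), you can also ask Numba from a Python prompt whether it can actually see the RTX 4080:

    from numba import cuda

    print(cuda.is_available())   # True if a usable CUDA GPU and driver were found
    if cuda.is_available():
        cuda.detect()            # prints the detected device(s), e.g. the RTX 4080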

After you have installed all these programs, you will be able to use your GPU for parallel computing. To start, you will need to import the JIT function from Numba and target CUDA. Essentially, you are transferring the work from your CPU to your GPU so that the GPU can run the function and send the result back to the CPU for analysis.

In Python scripts you want to run on the GPU, decorate the function with @jit(target_backend='cuda').
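
For illustration only, here is a minimal sketch using the standard numba.cuda kernel API (the exact decorator can vary between Numba versions, and the names add_arrays, a, b, out are made up for this example):

    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_arrays(a, b, out):
        i = cuda.grid(1)              # global thread index on the GPU
        if i < out.size:              # guard threads that run past the end of the array
            out[i] = a[i] + b[i]

    n = 1_000_000
    a = np.arange(n, dtype=np.float32)
    b = np.arange(n, dtype=np.float32)
    out = np.zeros_like(a)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    add_arrays[blocks, threads_per_block](a, b, out)   # Numba copies the arrays to and from the GPU
    print(out[:5])

While something like that runs, you should see the load move to the GPU in Task Manager (Performance tab, GPU) instead of the CPU sitting at 100%.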

Hope this helps.

(edited)

2 Intern

 • 

231 Posts

May 8th, 2024 15:28

@ChrisMaddrell You would want to, but if you are just using the 4080 for computational work, you could put a dummy plug on the GPU and use the onboard Intel GPU for the display.

You can plug your HDMI cable into the HDMI port on the Nvidia GPU, or use a DisplayPort cable. Either one is fine.

2 Intern

 • 

231 Posts

May 8th, 2024 15:29

@ispalten He wasn't; by default Python will run on the CPU only, and there are a few extra steps to get Python running for GPU parallel processing.

2 Intern

 • 

231 Posts

May 8th, 2024 15:30

@RoHe Python requires extra steps for GPU parallel processing; his PC wasn't overheating, it was just using the CPU for workloads that were better off sent to the GPU.

1 Rookie

 • 

4 Posts

May 8th, 2024 16:00

I'm using Python in a graphics package, Krita. Here's what's happening in Task Manager. As a novice (I specialise in art, not computing!) this is all Greek to me, but I'm very glad of the help, thanks.

2 Intern

 • 

231 Posts

May 8th, 2024 17:06

@ChrisMaddrell Without looking at your code: your code has to use the GPU flags; if it doesn't, it will default to the CPU, which is why you are getting 100% CPU utilization.

In Python, if you want a function to run on the GPU, use (target_backend='cuda').
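
As a rough sketch of what that looks like (the function names below are made up for the example, not anything Krita provides), a script can check for the GPU and fall back to the CPU when it is not available:

    import numpy as np
    from numba import cuda

    @cuda.jit
    def _brighten_kernel(pixels, amount, out):
        i = cuda.grid(1)
        if i < out.size:
            v = pixels[i] + amount
            out[i] = 255.0 if v > 255.0 else (0.0 if v < 0.0 else v)

    def brighten(pixels, amount):
        # Without a CUDA GPU (or without the extra setup steps) everything runs on the CPU.
        if not cuda.is_available():
            return np.clip(pixels + amount, 0, 255)
        flat = np.ascontiguousarray(pixels, dtype=np.float32).ravel()
        out = np.empty_like(flat)
        threads = 256
        blocks = (flat.size + threads - 1) // threads
        _brighten_kernel[blocks, threads](flat, np.float32(amount), out)   # GPU path
        return out.reshape(np.shape(pixels))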

(edited)

10 Elder

 • 

43.7K Posts

May 9th, 2024 19:30

Looks like you're getting good help with Python and CPU usage.

Win 11 Best Performance settings

