Can't run pyspark from Windows cmd or the conda prompt

I have installed Spark on my local system and the installation itself seems fine. However, I am not able to run `pyspark`. I have checked the PATH and the other environment details, but nothing seems to work. Please help me with this issue.

Output when running `pyspark` from the conda prompt:

    Python 3.7.4 (default, Aug  9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32

    Warning:
    This Python interpreter is in a conda environment, but the environment has
    not been activated.  Libraries may fail to load.  To activate this environment
    please see https://conda.io/activation
    
    Type "help", "copyright", "credits" or "license" for more information.
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    22/01/19 20:15:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    22/01/19 20:15:05 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor). This may indicate an error, since only one SparkContext should be running in this JVM (see SPARK-2243). The other SparkContext was created at:
    org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
    java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
    java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
    py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    py4j.Gateway.invoke(Gateway.java:238)
    py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
    py4j.ClientServerConnection.run(ClientServerConnection.java:106)
    java.base/java.lang.Thread.run(Thread.java:833)
    C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\shell.py:42: UserWarning: Failed to initialize Spark session.
      warnings.warn("Failed to initialize Spark session.")
    Traceback (most recent call last):
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\shell.py", line 38, in <module>
        spark = SparkSession._create_shell_session()  # type: ignore
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\sql\session.py", line 553, in _create_shell_session
        return SparkSession.builder.getOrCreate()
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\sql\session.py", line 228, in getOrCreate
        sc = SparkContext.getOrCreate(sparkConf)
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 392, in getOrCreate
        SparkContext(conf=conf or SparkConf())
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 147, in __init__
        conf, jsc, profiler_cls)
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 209, in _do_init
        self._jsc = jsc or self._initialize_context(self._conf._jconf)
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\pyspark\context.py", line 329, in _initialize_context
        return self._jvm.JavaSparkContext(jconf)
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\lib\py4j-0.10.9.2-src.zip\py4j\java_gateway.py", line 1574, in __call__
        answer, self._gateway_client, None, self._fqn)
      File "C:\Spark\spark-3.2.0-bin-hadoop3.2\python\lib\py4j-0.10.9.2-src.zip\py4j\protocol.py", line 328, in get_return_value
        format(target_id, ".", name), value)
    py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
    : java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.storage.StorageUtils$
            at org.apache.spark.storage.BlockManagerMasterEndpoint.<init>(BlockManagerMasterEndpoint.scala:110)
            at org.apache.spark.SparkEnv$.$anonfun$create$9(SparkEnv.scala:348)
            at org.apache.spark.SparkEnv$.registerOrLookupEndpoint$1(SparkEnv.scala:287)
            at org.apache.spark.SparkEnv$.create(SparkEnv.scala:336)
            at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:191)
            at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:277)
            at org.apache.spark.SparkContext.<init>(SparkContext.scala:460)
            at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
            at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:77)
            at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:499)
            at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:480)
            at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
            at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
            at py4j.Gateway.invoke(Gateway.java:238)
            at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
            at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
            at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
            at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
            at java.base/java.lang.Thread.run(Thread.java:833)

Running `pyspark` from the regular Windows cmd produces the same output and fails with the same exception (`java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.storage.StorageUtils$`); only the timestamps differ.


Is there any solution to this problem? Also, when closing Spark (`:quit`) it does not shut down cleanly and shows this error: `ShutdownHookManager: Exception while deleting Spark temp dir. java.io.IOException: Failed to delete:`


To activate an environment: `conda activate myenv`

If you receive this warning, you need to activate your environment. On Windows, run `c:\Anaconda3\Scripts\activate base` in Anaconda Prompt.
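For example, the full sequence in Anaconda Prompt might look like this (a minimal sketch; the install path `c:\Anaconda3` and the environment name `myenv` are assumptions, so adjust them to your setup):

    REM Activate the base environment (path depends on where Anaconda is installed)
    c:\Anaconda3\Scripts\activate base

    REM Or activate a specific environment by name
    conda activate myenv

    REM With the environment active, launch the PySpark shell
    pyspark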

Windows is extremely sensitive to proper activation. This is because the Windows library loader does not support the concept of libraries and executables that know where to search for their dependencies (RPATH). Instead, Windows relies on a dynamic-link library search order.
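If you want to confirm the activation actually took effect before starting Spark, you can check which environment is active and which interpreter is first on the PATH (a sketch using standard conda and Windows commands):

    REM The active environment is marked with an asterisk
    conda info --envs

    REM The first python listed should live inside the activated environment
    where python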