Questions tagged [databricks]
Databricks is a unified platform with tools for building, deploying, sharing, and maintaining enterprise-grade data and AI solutions at scale. The Databricks Lakehouse Platform integrates with cloud storage and security in your cloud account, and manages and deploys cloud infrastructure on your behalf. Databricks is available on AWS, Azure, and GCP. Use this tag for questions related to the Databricks Lakehouse Platform.
databricks
7,833
questions
35
votes
3
answers
87k
views
Exploding nested Struct in Spark dataframe
I'm working through a Databricks example. The schema for the dataframe looks like:
> parquetDF.printSchema
root
|-- department: struct (nullable = true)
| |-- id: string (nullable = true)
| |-...
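A minimal sketch of one common answer, assuming the schema above: a struct column is flattened with select (explode applies to arrays, not structs):
# expand every field of the department struct into top-level columns
flatDF = parquetDF.select("department.*")
flatDF.printSchema()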
33
votes
6
answers
68k
views
How to delete all files from folder with Databricks dbutils
Can someone let me know how to use the Databricks dbutils to delete all files from a folder?
I have tried the following, but unfortunately Databricks doesn't support wildcards.
dbutils.fs.rm('adl://...
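A hedged sketch of one common answer (the path is a placeholder): dbutils.fs.rm accepts a recursive flag, so the folder can be removed wholesale and re-created empty:
# second argument True = recursive delete of the folder and its contents
dbutils.fs.rm("dbfs:/mnt/my_folder", True)
dbutils.fs.mkdirs("dbfs:/mnt/my_folder")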
31
votes
6
answers
96k
views
Databricks: Download a dbfs:/FileStore File to my Local Machine?
I am using saveAsTextFile() to store the results of a Spark job in the folder dbfs:/FileStore/my_result.
I can access the different "part-xxxxx" files using the web browser, but I would like to ...
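One hedged pointer: anything under dbfs:/FileStore is typically served by the workspace web server under /files, so each part file can be downloaded in a browser. A sketch that prints the candidate URLs (the workspace host is a placeholder):
for f in dbutils.fs.ls("dbfs:/FileStore/my_result"):
    # e.g. https://<databricks-instance>/files/my_result/part-00000
    print(f.path.replace("dbfs:/FileStore", "https://<databricks-instance>/files"))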
29
votes
8
answers
63k
views
Databricks drop a delta table?
How can I drop a Delta Table in Databricks? I can't find any information in the docs... maybe the only solution is to delete the files inside the folder 'delta' with the magic command or dbutils:
%fs ...
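A hedged sketch of the usual two cases (names and paths are placeholders): a metastore-registered Delta table is dropped with SQL, while a path-only table is removed by deleting its files:
spark.sql("DROP TABLE IF EXISTS my_schema.my_delta_table")
# for a path-based table, deleting the underlying files is the usual route
dbutils.fs.rm("dbfs:/delta/my_table", True)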
28
votes
3
answers
70k
views
How to list all the mount points in Azure Databricks?
I tried %fs ls dbfs:/mnt, but I want to know: does this give me all the mount points?
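For reference, dbutils exposes the mount table directly; a minimal sketch:
# lists every mount point and its backing source, not just what sits under /mnt
for mount in dbutils.fs.mounts():
    print(mount.mountPoint, "->", mount.source)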
27
votes
6
answers
25k
views
Databricks: Issue while creating spark data frame from pandas
I have a pandas data frame which I want to convert into a Spark data frame. Usually I use the below code to create a Spark data frame from pandas, but all of a sudden I started to get the below error. I am ...
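The error text is truncated above, but a common cause is type inference on object-typed columns; a hedged sketch (column names are hypothetical) that supplies an explicit schema instead:
import pandas as pd
from pyspark.sql.types import StructType, StructField, LongType, StringType
pdf = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
schema = StructType([
    StructField("id", LongType(), True),
    StructField("name", StringType(), True),
])
sdf = spark.createDataFrame(pdf, schema=schema)  # sidesteps fragile inference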
26
votes
5
answers
57k
views
Databricks: How do I get path of current notebook?
Databricks is smart and all, but how do you identify the path of your current notebook? The guide on the website does not help.
It suggests:
%scala
dbutils.notebook.getContext.notebookPath
res1: ...
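The Scala call above has a commonly cited Python counterpart that reaches through the same context object; it is an internal API, so treat this as a hedged sketch:
path = (dbutils.notebook.entry_point.getDbutils()
        .notebook().getContext().notebookPath().get())
print(path)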
25
votes
4
answers
36k
views
How to handle an AnalysisException on Spark SQL?
I am trying to execute a list of queries in Spark, but if the query does not run correctly, Spark throws me the following error:
AnalysisException: "ALTER TABLE CHANGE COLUMN is not supported for ...
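A minimal sketch of catching the exception per query, assuming a list of SQL strings named queries:
from pyspark.sql.utils import AnalysisException
for query in queries:
    try:
        spark.sql(query)
    except AnalysisException as e:
        print(f"query failed, continuing: {e}")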
25
votes
6
answers
47k
views
Azure Databricks - Can not create the managed table The associated location already exists
I have the following problem in Azure Databricks. Sometimes when I try to save a DataFrame as a managed table:
SomeData_df.write.mode('overwrite').saveAsTable("SomeData")
I get the following error:
...
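The error text is truncated, but a frequently cited workaround is to clear the leftover table directory before overwriting; the warehouse path below is an assumption based on the default Hive layout:
# remove the stale managed-table location left behind by an earlier failed write
dbutils.fs.rm("dbfs:/user/hive/warehouse/somedata", True)
SomeData_df.write.mode('overwrite').saveAsTable("SomeData")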
25
votes
4
answers
10k
views
How to detect Databricks environment programmatically
I'm writing a spark job that needs to be runnable locally as well as on Databricks.
The code has to be slightly different in each environment (file paths) so I'm trying to find a way to detect if the ...
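One hedged way to branch: Databricks runtimes set an environment variable that a plain local Spark install does not.
import os
def running_on_databricks() -> bool:
    # set on Databricks clusters; absent in a local spark-submit
    return "DATABRICKS_RUNTIME_VERSION" in os.environ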
25
votes
5
answers
48k
views
How to load databricks package dbutils in pyspark
I was trying to run the below code in pyspark.
dbutils.widgets.text('config', '', 'config')
It was throwing me an error saying
Traceback (most recent call last):
File "<stdin>", line 1, ...
24
votes
7
answers
64k
views
How to drop a column from a Databricks Delta table?
I have recently started discovering Databricks and faced a situation where I need to drop a certain column of a delta table. When I worked with PostgreSQL it was as easy as
ALTER TABLE main....
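Older Delta versions have no DROP COLUMN; the commonly cited workaround is to rewrite the table without the column (table and column names are placeholders for the truncated statement):
df = spark.read.table("main.my_table")
(df.drop("column_to_drop")
   .write.mode("overwrite")
   .option("overwriteSchema", "true")  # required, since the schema shrinks
   .saveAsTable("main.my_table"))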
24
votes
3
answers
28k
views
NameError: name 'dbutils' is not defined in pyspark
I am running a PySpark job in Databricks cloud. I need to write some of the CSV files to the Databricks filesystem (DBFS) as part of this job, and I also need to use some of the dbutils native commands ...
23
votes
2
answers
10k
views
Apache Spark + Delta Lake concepts
I have many questions related to Spark + Delta.
1) Databricks proposes 3 layers (bronze, silver, gold), but which layer is recommended for Machine Learning and why? I suppose they propose to ...
21
votes
3
answers
33k
views
Databricks - How to change a partition of an existing Delta table?
I have a table in Databricks delta which is partitioned by transaction_date. I want to change the partition column to view_date. I tried to drop the table and then create it with a new partition ...
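Delta has no in-place repartitioning of an existing table; the commonly cited route is to rewrite it. A hedged sketch (the table name is a placeholder):
df = spark.read.table("my_delta_table")
(df.write.format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")  # lets the partition layout change
   .partitionBy("view_date")
   .saveAsTable("my_delta_table"))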
21
votes
1
answer
14k
views
Local instance of Databricks for development
I am currently working on a small team that is developing a Databricks based solution. For now we are small enough to work off of cloud instances of Databricks. As the group grows this will not ...
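There is no full local Databricks, but the open-source pieces run locally for development; a hedged sketch of a local Spark + Delta session using the delta-spark package:
# pip install pyspark delta-spark
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
builder = (SparkSession.builder.appName("local-dev")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog"))
spark = configure_spark_with_delta_pip(builder).getOrCreate()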
19
votes
3
answers
29k
views
lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU
I am running the following code for LSTM on Databricks with GPU
model = Sequential()
model.add(LSTM(64, activation=LeakyReLU(alpha=0.05), batch_input_shape=(1, timesteps, n_features),
stateful=...
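The warning fires because the custom activation disqualifies the fused kernel; per the Keras docs, cuDNN needs activation='tanh', recurrent_activation='sigmoid', recurrent_dropout=0, unroll=False and use_bias=True. A hedged sketch of a compliant layer (shape variables assumed from the snippet above):
from tensorflow import keras
model = keras.Sequential([
    keras.layers.LSTM(
        64,
        activation="tanh",              # cuDNN requires tanh, not LeakyReLU
        recurrent_activation="sigmoid",
        recurrent_dropout=0.0,
        unroll=False,
        use_bias=True,
        batch_input_shape=(1, timesteps, n_features),
        stateful=True,
    )
])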
19
votes
7
answers
56k
views
How to slice a pyspark dataframe in two row-wise
I am working in Databricks.
I have a dataframe which contains 500 rows, and I would like to create two dataframes: one containing 100 rows and the other containing the remaining 400 rows.
+----------------...
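A minimal sketch of one approach; note that subtract is a set difference and drops duplicate rows, so an id-based split is safer when rows can repeat:
first_100 = df.limit(100)
rest_400 = df.subtract(first_100)  # everything not in the first slice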
19
votes
4
answers
44k
views
How to move files of same extension in databricks files system?
I am facing a file-not-found exception when I try to move files with * in DBFS. Both the source and destination directories are in DBFS. I have the source file named "test_sample.csv" ...
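Since the DBFS utilities don't expand *, a commonly used workaround is to filter the listing in Python; a minimal sketch (directories are placeholders):
for f in dbutils.fs.ls("dbfs:/source_dir"):
    if f.name.endswith(".csv"):
        dbutils.fs.mv(f.path, "dbfs:/dest_dir/" + f.name)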
19
votes
4
answers
28k
views
How to find size (in MB) of dataframe in pyspark?
How do I find the size (in MB) of a dataframe in PySpark?
df=spark.read.json("/Filestore/tables/test.json")
I want to find the size of df or of test.json.
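One frequently cited estimate goes through Catalyst's plan statistics; this is an internal API and only an estimate, so treat it as a hedged sketch:
df = spark.read.json("/Filestore/tables/test.json")
stats = df._jdf.queryExecution().optimizedPlan().stats()
size_mb = int(str(stats.sizeInBytes())) / (1024 * 1024)  # BigInt via str()
print(f"~{size_mb:.1f} MB")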
19
votes
1
answer
22k
views
Databricks SQL - How to get all the rows (more than 1000) in the first run?
Currently, in Databricks if we run the query, it always returns 1000 rows in the first run. If we need all the rows, we need to execute the query again.
In the situations where we know that we need to ...
19
votes
1
answer
2k
views
PySpark and Protobuf Deserialization UDF Problem
I'm getting this error
Can't pickle <class 'google.protobuf.pyext._message.CMessage'>: it's not found as google.protobuf.pyext._message.CMessage
when I try to create a UDF in PySpark. ...
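The generated C-extension message classes can't be pickled with the UDF's closure; one hedged workaround is to import the generated module inside the UDF so each executor builds its own copy (module and field names here are hypothetical):
from pyspark.sql import functions as F
from pyspark.sql.types import StringType
@F.udf(returnType=StringType())
def decode(payload):
    from my_messages_pb2 import MyMessage  # hypothetical generated module
    msg = MyMessage()
    msg.ParseFromString(bytes(payload))
    return msg.some_field  # return plain Python values, never the message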
18
votes
3
answers
79k
views
How to export data from a dataframe to a file databricks
I'm currently doing the Introduction to Spark course at edX.
Is there a possibility to save dataframes from Databricks on my computer?
I'm asking because this course provides Databricks ...
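A hedged sketch of one route: write the dataframe under /FileStore, which the workspace serves over the web (the output path is a placeholder):
(df.coalesce(1)                      # a single part file is easier to grab
   .write.mode("overwrite")
   .option("header", True)
   .csv("dbfs:/FileStore/my_export"))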
18
votes
2
answers
16k
views
What does "Determining location of DBIO file fragments..." mean, and how do I speed it up?
When running simple SQL commands in Databricks, sometimes I get the message:
Determining location of DBIO file fragments. This operation can take
some time.
What does this mean, and how do I ...
17
votes
3
answers
25k
views
Printing secret value in Databricks
Even though secrets are for masking confidential information, I need to see the value of a secret to use it outside Databricks.
When I simply print the secret it shows [REDACTED].
print(dbutils....
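Redaction matches the exact secret string, so a commonly cited (and obviously security-sensitive) trick is printing it character by character; a minimal sketch (scope and key are placeholders):
secret = dbutils.secrets.get(scope="my-scope", key="my-key")
print(" ".join(secret))  # single characters no longer match the redaction filter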
17
votes
2
answers
96k
views
Read/Write single file in DataBricks
I have a file which contains a list of names stored in a simple text file. Each row contains one name. Now I need to programmatically append a new name to this file based on a user's input.
For the ...
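DBFS is exposed to Python as a local FUSE mount under /dbfs, so ordinary file I/O works; a minimal sketch (the file path is a placeholder):
new_name = "Alice"  # e.g. taken from user input
with open("/dbfs/FileStore/names.txt", "a") as f:
    f.write(new_name + "\n")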
17
votes
2
answers
50k
views
How to set environment variable in databricks?
Simple question, but I can't find a simple guide on how to set the environment variable in Databricks. Also, is it important to set the environment variable on both the driver and executors (and would ...
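A hedged pointer: the usual place is the cluster's Spark config UI (Advanced options > Environment variables), which is generally described as applying cluster-wide; reading a variable back is plain Python:
import os
print(os.environ.get("MY_VAR"))  # MY_VAR assumed set in the cluster config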
16
votes
8
answers
131k
views
How to read xlsx or xls files as spark dataframe
Can anyone let me know how we can read xlsx or xls files as a Spark dataframe without converting them first?
I have already tried reading with pandas and then converting to a Spark dataframe, but got ...
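A hedged sketch using the spark-excel library, which must be installed on the cluster first (e.g. the com.crealytics:spark-excel Maven package; the path is a placeholder):
df = (spark.read.format("com.crealytics.spark.excel")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("dbfs:/FileStore/tables/my_file.xlsx"))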
16
votes
3
answers
49k
views
How to rename a column in Databricks
How do you rename a column in Databricks?
The following does not work:
ALTER TABLE mySchema.myTable change COLUMN old_name new_name int
It returns the error:
ALTER TABLE CHANGE COLUMN is not ...
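For Delta tables, RENAME COLUMN works on newer runtimes once column mapping is enabled; a hedged sketch against the table named above:
spark.sql("""
  ALTER TABLE mySchema.myTable SET TBLPROPERTIES (
    'delta.minReaderVersion' = '2',
    'delta.minWriterVersion' = '5',
    'delta.columnMapping.mode' = 'name')
""")
spark.sql("ALTER TABLE mySchema.myTable RENAME COLUMN old_name TO new_name")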
16
votes
4
answers
38k
views
list the files of a directory and subdirectory recursively in Databricks(DBFS)
Using Python/dbutils, how do I display the files of the current directory and its subdirectories recursively in the Databricks file system (DBFS)?
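dbutils.fs.ls is not recursive, so a small helper is the usual answer; a minimal sketch:
def list_files(path):
    for f in dbutils.fs.ls(path):
        if f.isDir():
            yield from list_files(f.path)  # descend into subdirectories
        else:
            yield f.path
for p in list_files("dbfs:/mnt"):
    print(p)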
16
votes
1
answer
19k
views
Use of lit() in expr()
The line:
df.withColumn("test", expr("concat(lon, lat)"))
works as expected but
df.withColumn("test", expr("concat(lon, lit(','), lat)"))
produces the following exception:
org.apache.spark.sql....
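expr() parses Spark SQL, where lit() does not exist as a function; the SQL spelling of a literal is simply a quoted value:
from pyspark.sql.functions import expr
# inside expr(), write the literal in SQL syntax instead of calling lit()
df = df.withColumn("test", expr("concat(lon, ',', lat)"))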
16
votes
1
answer
13k
views
Spark: Read an inputStream instead of File
I'm using SparkSQL in a Java application to do some processing on CSV files using Databricks for parsing.
The data I am processing comes from different sources (Remote URL, local file, Google Cloud ...
15
votes
1
answer
9k
views
Databricks Community Edition Cluster won't start
I am trying to start a cluster that was terminated in a Community Edition. However, whenever I click on 'start' the cluster won't start. It would appear I have to create a new cluster every time I want ...
15
votes
3
answers
45k
views
Ways to Plot Spark Dataframe without Converting it to Pandas
Is there any way to plot information from a Spark dataframe without converting the dataframe to pandas?
Did some online research but can't seem to find a way. I need to automatically save these plots ...
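Inside a Databricks notebook the built-in display() renders a Spark dataframe with chart options and no pandas conversion (saving the plots automatically is a separate problem):
display(df)  # notebook built-in; pick a plot type in the result toolbar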
15
votes
2
answers
48k
views
How to write pandas dataframe into Databricks dbfs/FileStore?
I'm new to Databricks and need help writing a pandas dataframe into the Databricks local file system.
I searched Google but could not find any case similar to this, and I also tried the help guide ...
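Through the /dbfs FUSE mount, pandas can write to DBFS directly; a minimal sketch (the path is a placeholder):
import pandas as pd
pdf = pd.DataFrame({"a": [1, 2, 3]})
pdf.to_csv("/dbfs/FileStore/my_data.csv", index=False)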
15
votes
1
answer
27k
views
Error running Spark on Databricks: constructor public XXX is not whitelisted
I was using Azure Databricks and trying to run some example python code from this page.
But I get this exception:
py4j.security.Py4JSecurityException: Constructor public org.apache.spark.ml....
15
votes
3
answers
61k
views
How to solve this error org.apache.spark.sql.catalyst.errors.package$TreeNodeException
I have two processes, and each process does:
1) connect to an Oracle DB and read a specific table
2) form a dataframe and process it
3) save the df to Cassandra.
If I run both processes in parallel, both try to ...
15
votes
2
answers
2k
views
Switching between Databricks Connect and local Spark environment
I am looking to use Databricks Connect for developing a pyspark pipeline. DBConnect is really awesome because I am able to run my code on the cluster where the actual data resides, so it's perfect for ...
14
votes
1
answer
34k
views
How to create a empty folder in Azure Blob from Azure databricks
I have a scenario where I want to list all the folders inside a directory in Azure Blob. If no folders are present, I want to create a new folder with a certain name.
I am trying to list the folders using dbutils.fs.ls(...
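A hedged sketch with dbutils (the mount path is a placeholder); note that blob storage has no real directories, so an 'empty folder' only becomes durable once something is written into it:
entries = dbutils.fs.ls("dbfs:/mnt/container/dir")
if not any(f.isDir() for f in entries):
    dbutils.fs.mkdirs("dbfs:/mnt/container/dir/new_folder")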
14
votes
3
answers
31k
views
Databricks - is not empty but it's not a Delta table
I run a query on Databricks:
DROP TABLE IF EXISTS dublicates_hotels;
CREATE TABLE IF NOT EXISTS dublicates_hotels
...
I'm trying to understand why I receive the following error:
Error in SQL ...
14
votes
2
answers
28k
views
reading data from URL using spark databricks platform
Trying to read data from a URL using Spark on the Databricks Community Edition platform.
I tried to use spark.read.csv and SparkFiles, but I am still missing some simple point.
url = "https://raw....
14
votes
4
answers
9k
views
How to login SSH on Azure Databricks cluster
I used the following Ubuntu command for SSH login:
ssh user@hostname_or_IP
I can see the master node hostname, but I am not able to get the username for the Azure Databricks cluster.
Refer this ...
14
votes
2
answers
13k
views
How to properly access dbutils in Scala when using Databricks Connect
I'm using Databricks Connect to run code in my Azure Databricks cluster locally from IntelliJ IDEA (Scala).
Everything works fine. I can connect, debug, inspect locally in the IDE.
I created a ...
13
votes
2
answers
49k
views
Adding constant value column to spark dataframe
I am using Spark version 2.1 in Databricks. I have a data frame named wamp to which I want to add a column named region which should take the constant value NE. However, I get an error saying ...
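The truncated error is usually the result of passing a bare string where a Column is expected; wrapping the constant in lit() is the standard fix:
from pyspark.sql.functions import lit
wamp = wamp.withColumn("region", lit("NE"))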
13
votes
1
answer
20k
views
ArrowTypeError: Did not pass numpy.dtype object', 'Conversion failed for column X with type int32
Problem
I am trying to save a data frame as a Parquet file on Databricks and am getting the ArrowTypeError.
Databricks Runtime Version:
7.6 ML (includes Apache Spark 3.0.1, Scala 2.12)
Log Trace
...
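A frequently cited workaround, hedged since it trades speed for compatibility, is to disable Arrow for the conversion path:
# fall back to the non-Arrow conversion (Spark 3.x conf name)
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "false")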
13
votes
4
answers
42k
views
How to add a new column to a Delta Lake table?
I'm trying to add a new column to data stored as a Delta Table in Azure Blob Storage. Most of the actions being done on the data are upserts, with many updates and few new inserts. My code to write ...
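Delta can evolve the schema during a write via mergeSchema; a hedged sketch (the dataframe and path are placeholders):
# df_new: a dataframe whose schema includes the added column
(df_new.write.format("delta")
   .mode("append")
   .option("mergeSchema", "true")  # adds the new column to the table schema
   .save("dbfs:/mnt/delta/my_table"))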
13
votes
3
answers
26k
views
Python Version in Azure Databricks
I am trying to find out the python version I am using in Databricks.
To find out I tried
import sys
print(sys.version)
And I got the output as 3.7.3
However when I went to Cluster --> SparkUI --> ...
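Driver and executors can run different Pythons, which may explain the mismatch; a hedged one-liner to compare both sides:
import sys
print("driver:", sys.version)
print("executors:",
      spark.range(1).rdd.map(lambda _: __import__("sys").version).collect())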
13
votes
5
answers
9k
views
Delta Lake rollback
I need an elegant way to roll back Delta Lake to a previous version.
My current approach is listed below:
import io.delta.tables._
val deltaTable = DeltaTable.forPath(spark, testFolder)
spark.read....
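Newer Delta versions have a first-class rollback; a hedged sketch (version number and path are placeholders):
# the Python API DeltaTable.forPath(spark, path).restoreToVersion(5) also exists
spark.sql("RESTORE TABLE delta.`/path/to/table` TO VERSION AS OF 5")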
13
votes
4
answers
27k
views
List databricks secret scope and find referred keyvault in azure databricks
How can we find the existing secret scopes in a Databricks workspace? And which Key Vault is referenced by a specific secret scope in Azure Databricks?
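Listing the scopes is built in; the Key Vault backing, however, is only visible through the REST API (GET /api/2.0/secrets/scopes/list), not dbutils. A minimal sketch of the first half:
for scope in dbutils.secrets.listScopes():
    print(scope.name)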
13
votes
1
answer
33k
views
How can I convert a pyspark.sql.dataframe.DataFrame back to a sql table in databricks notebook
I created a dataframe of type pyspark.sql.dataframe.DataFrame by executing the following line:
dataframe = sqlContext.sql("select * from my_data_table")
How can I convert this back to a sparksql ...
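Registering the dataframe as a temporary view makes it queryable from SQL again; a minimal sketch:
dataframe.createOrReplaceTempView("my_data_view")
spark.sql("select * from my_data_view").show()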