All Topics


Growing a bit exasperated with an issue I'm facing while integrating Splunk with the Duo Admin API. I'm seeing the following error right from the get-go during the initial configuration: EOF occurred in violation of protocol (_ssl.c:1106). I have not seen it before, and it's even stranger because there are no connectivity issues: a curl to my API host shows connectivity is fine, the TLS handshake is successful, and a TCP dump shows it was able to reach Duo cloud's IP. Here's a screenshot of the error preventing me from proceeding. The error happens at initial setup, and it's hard to determine why with no information or logs to go off of... is anyone familiar with this?
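That particular Python error usually points at a TLS-version mismatch or an intercepting proxy rather than basic connectivity, so a successful curl doesn't rule it out. A couple of hedged checks (the API hostname below is a placeholder, not taken from the post):

    # Confirm the host negotiates TLS 1.2 with a plain OpenSSL client
    openssl s_client -connect api-XXXXXXXX.duosecurity.com:443 -tls1_2 < /dev/null
    # Check whether splunkd might be picking up proxy settings that curl is not using
    env | grep -i proxy
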
Upgraded the Splunk Universal Forwarder from 9.0.2 to 9.1.0. For the first time, ./splunk list monitor with the default password gives me the following error:

"Remote login has been disabled for 'admin' with the default password. Either set the password, or override by changing the 'allowRemoteLogin' setting in your server.conf file."

I tried the command below to reset the default password:

./splunk edit user admin -password <newpassword> -auth admin:changeme

but it still gives me the same "Remote login has been disabled for 'admin' with the default password..." error. Looking for any answers.
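If the aim is just to get the CLI working again, the message itself names the knob; a minimal sketch of the override in $SPLUNK_HOME/etc/system/local/server.conf on the forwarder (check the server.conf spec for your version for the accepted values):

    [general]
    allowRemoteLogin = always

Restart the forwarder after the change. Setting a proper non-default admin password instead of (or in addition to) the override is generally the safer choice.
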
Looking for recommendations for automating the Splunk version upgrade process for a clustered (indexer and search head cluster) deployment. I'm curious whether I can consolidate the upgrade process into a centrally automated solution.

Details:
- Windows Server based environment
- indexer cluster
- search cluster, multisite
- deployment server
- license/MC server

Thanks in advance!
Hi, I'm not able to integrate Splunk with Nozomi using the available app (Nozomi Networks Universal Add-on). On the other hand, I've tested the legacy add-on and do receive the alerts/assets, just not with full info. The server (Nozomi Guardian) uses a self-signed certificate. After configuring the latest version and setting up the inputs for receiving alerts, assets, etc., there's no data being received in the index, and in the Splunk logs I see the following:

06-13-2024 21:23:01.529 +0200 ERROR ExecProcessor [3854374 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA-nozomi-networks-universal-add-on/bin/universal_session.py" HTTPSConnectionPool(host='192.168.1.4', port=443): Max retries exceeded with url: /api/open/sign_in (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1106)')))

I thought the solution could be simply disabling SSL verification, but then why does the legacy add-on work fine while the new version does not? If I do need to disable SSL verification, I'd like to know which file and parameter are the right ones.

Thank you,
So I have Splunk Cloud, but we still use a heavy forwarder, a universal forwarder, and a deployment server. The UF server has definitely come in handy for grabbing local data, and we use the heavy forwarder for various things. However, I'm not sure what the deployment server is for. Does anyone have documentation on what is necessary versus what is a nicety, and knowledge of the specs needed for each?
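The deployment server only matters if forwarders are configured to pull apps from it; a quick, hedged way to check from the deployment server host whether anything is actually using it:

    # Forwarders that phone home to this deployment server
    ./splunk list deploy-clients
    # Server class to app mappings, if any have been defined
    cat $SPLUNK_HOME/etc/system/local/serverclass.conf
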
I have data with two fields that share a static range of 10 values. I'd like to show a column chart with the buckets on the X axis and two bars in each bucket, one for field A, the other for field B. This doesn't work:

index=foo message="bar"
| stats count as "Field A" by A
| append [ search index=foo message="bar" | stats count as "Field B" by B ]

I'm sure I'm missing something obvious... To reiterate, fields A and B are present in all events returned and share the same "buckets". Call them strings like "Group 1", "Group 2", etc. So A="Group 3" and B="Group 6" could be in the same event, and in the chart that event should add a count to Group 3 in the Field A column and to Group 6 in the Field B column. Thanks!
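One way to get both fields onto the same X axis is to expand each event into one row per field and then chart over the bucket. A sketch, assuming A and B are single-valued and the group names never contain a pipe character:

    index=foo message="bar"
    | eval pair=mvappend("Field A|".A, "Field B|".B)
    | mvexpand pair
    | eval series=mvindex(split(pair, "|"), 0), bucket=mvindex(split(pair, "|"), 1)
    | chart count over bucket by series
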
I have UFs installed on some SQL servers that forward certain events (selected by event ID) to my Splunk. I have created a search query to parse out the data I need to make a nice table. However, ideally I'd like to do this at ingest time instead of at search time. I was told by my manager to research props.conf and transforms.conf, and here I am. Not sure if that is the proper route or if there are other suggestions. Thank you.

index="wineventlog"
| rex field=EventData_Xml "server_principal_name:(?<server_principal_name>\S+)"
| rex field=EventData_Xml "server_instance_name:(?<server_instance_name>\S+)"
| rex field=EventData_Xml "action_id:(?<action_id>\S+)"
| rex field=EventData_Xml "succeeded:(?<succeeded>\S+)"
| table _time, action_id, succeeded, server_principal_name, server_instance_name
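props.conf/transforms.conf is the right direction; for ingest-time (indexed) fields the usual pattern is a transform with WRITE_META on the parsing tier (heavy forwarder or indexer), plus fields.conf. A hedged sketch; the sourcetype stanza name and regexes are assumptions to adapt to your data:

    # props.conf
    [XmlWinEventLog]
    TRANSFORMS-sqlaudit = extract_sql_principal, extract_sql_instance

    # transforms.conf
    [extract_sql_principal]
    REGEX = server_principal_name:(\S+)
    FORMAT = server_principal_name::$1
    WRITE_META = true

    [extract_sql_instance]
    REGEX = server_instance_name:(\S+)
    FORMAT = server_instance_name::$1
    WRITE_META = true

    # fields.conf
    [server_principal_name]
    INDEXED = true

    [server_instance_name]
    INDEXED = true

Note that search-time extraction (an EXTRACT- stanza in props.conf) is often the better first choice, since indexed fields add to index size and cannot be changed retroactively.
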
Splunk Enterprise 9.0.6, and building a summary index of source numbers (count) and distinct destinations called (dc(destinationnumber)). When I run this:

... | stats count dc(destinationnumber) by sourcenumber

I get something like:

sourcenumber,count,dc(destinationnumber)
+15551234567,10,8

indicating it called 10 times to 8 different numbers. Perfect. But with this:

... | sistats count dc(destinationnumber) by sourcenumber

I get:

psrsvd_ct_destinationnumber,psrsvd_gc,psrsvd_v,psrsvd_vm_destinationnumber
10,10,1,+19991234567;2,+18881234567;2,+17771234567;1,+15551234567;1 (etc)

I found no clear help on the sistats page, and in other posts like this one it seems to work (though those are older posts and not using count). My best guess is that the vm column 'preserves' the details, but I don't know why dc() isn't working the way I expect.
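Those psrsvd_* columns are expected: sistats emits intermediate "prestats" data, and the dc() only materializes when a later stats runs over the summary index. A sketch of the consuming search, with the summary index and source names as assumptions:

    index=my_summary source="my_si_search"
    | stats count dc(destinationnumber) by sourcenumber
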
Hey, I am setting up a Splunk dev environment. I have one indexer, one SH, and one forwarder. I have uninstalled and reinstalled the dev indexer, and I am trying to set it up to use two different filesystems for hot/cold data. The error I'm receiving when I restart Splunk is:

Problem parsing indexes.conf: Cannot load IndexConfig: Cannot create index '_audit': path of homePath must be absolute ('$SPLUNK_HOME/data/audit/db')
Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

I'm not sure how to set this up correctly. I reinstalled the indexer so I could fix the mounts and storage. In /export/opt/splunk/etc/system/local/indexes.conf I have something like:

[default]
homePath = $SPLUNK_DB/hot/$_index_name/db
coldPath = $SPLUNK_DB/cold/$_index_name/colddb

For SPLUNK_DB, I have tried to set it in splunk-launch.conf, as shown below:

# Version 9.2.0.1
# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory containing the splunk
# CLI executable.
# SPLUNK_HOME=/export/opt/splunk/

# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory. This can be overridden
# here:
# SPLUNK_DB=$SPLUNK_HOME/data/

# Splunkd daemon name
SPLUNK_SERVER_NAME=Splunkd

# If SPLUNK_OS_USER is set, then Splunk service will only start
# if the 'splunk [re]start [splunkd]' command is invoked by a user who
# is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
# (This setting can be specified as username or as UID.)
#
# SPLUNK_OS_USER

PYTHONHTTPSVERIFY=0
PYTHONUTF8=1
ENABLE_CPUSHARES=true
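indexes.conf expands $SPLUNK_DB and $_index_name but, as far as I know, not $SPLUNK_HOME, so if SPLUNK_DB ends up unset or containing a literal "$SPLUNK_HOME/..." string, homePath resolves to the non-absolute path shown in the error. A hedged sketch that sidesteps the expansion by using absolute paths; the mount points are assumptions to adapt:

    # splunk-launch.conf
    SPLUNK_HOME=/export/opt/splunk
    SPLUNK_DB=/export/opt/splunk/data

    # etc/system/local/indexes.conf
    [default]
    homePath = $SPLUNK_DB/hot/$_index_name/db
    coldPath = $SPLUNK_DB/cold/$_index_name/colddb
    thawedPath = $SPLUNK_DB/thawed/$_index_name/thaweddb
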
I'm trying to set the Description field of a ServiceNow incident ticket through Splunk, and the string I'm passing contains a newline (\n). But when Splunk creates/updates the ticket, either through the snowincident command or an alert action, it automatically escapes the backslash character. So after Splunk passes the info to ServiceNow, the underlying JSON of the ticket looks like this:

{"description":"this is a \\n new line"}

and my Description field looks like this:

this is a \n new line

Is this something that Splunk is doing, or the ServiceNow Add-on? Does anyone know of a way to get around this?
Current query; this shows how many successful login attempts there have been:

index=abc granttype=mobile
| fields subjectid, message
| search message="*Token Success*"
| stats count

I am now looking to create a panel to show the daily average number of successful login attempts across 7 days. Is anyone able to help me with a query, please?
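A sketch of one way to do it: count per day first, then average the daily counts (field names come from the query above; the explicit 7-day time bounds are an assumption):

    index=abc granttype=mobile message="*Token Success*" earliest=-7d@d latest=@d
    | bin _time span=1d
    | stats count as daily_success by _time
    | stats avg(daily_success) as avg_daily_success
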
Hello! I have a dashboard with several visualization panels. One of these is linked to a search that pulls the Top 10 Source IPs by log activity:

index="index_name" $token.source.address$
| fields source_address
| stats count by source_address
| table source_address, count
| rename source_address as "Source IP", count as "Count"
| sort -Count
| head 10

The token, $token.source.address$, is set by a text box on the dashboard for the bar visualization below. However, in addition to the correct value being shown, there are often other, incorrect values shown as well. There doesn't seem to be a pattern as to why this happens. Does anyone know why this may happen and how to correct it? Thanks!
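Because the token is dropped into the base search as a bare term, it matches anywhere in the raw event, so events where that value appears in some other field (for example as a destination address) are counted too. A sketch that scopes the token to the field, assuming source_address is extracted at search time:

    index="index_name" source_address=$token.source.address$
    | stats count by source_address
    | sort - count
    | head 10
    | rename source_address as "Source IP", count as "Count"
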
Hi everyone, I was wondering if anyone had any suggestions on effective ways of pulling application data from Splunk Cloud into the Power BI platform without using the Splunk ODBC driver. Our Business Intelligence team is keen on enriching their data by integrating Splunk with Power BI, and we're aiming to ensure that this integration follows best practices and is both efficient and reliable. Has anyone here successfully implemented this kind of integration? If so, could you share the approach you took, the tools or connectors you used, and any tips or challenges you encountered? Thanks in advance for your help! Patrick #powerbi #odbc #splunk #businessintelligence
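One pattern that comes up, in the absence of the ODBC driver, is pulling results over Splunk's REST search API (which Power BI can consume via its Web / Power Query connector). A hedged curl-level sketch of the endpoint involved; whether the management port (8089) is reachable on your Splunk Cloud stack, and the stack/user names, are assumptions to confirm:

    curl -k -u 'svc_powerbi:***' \
      'https://yourstack.splunkcloud.com:8089/services/search/jobs/export' \
      --data-urlencode search='search index=app_data earliest=-24h | stats count by host' \
      -d output_mode=csv
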
Hi community, can anyone help me figure out which Get operations return incorrect data after an Update (both Get and Update log the request and response)? In my case the data can be updated multiple times, and I need to verify that every Get returns the correct, most recently updated value. For example, with these 5 log rows, sorted by time:

1. Update A = 5
2. Get A = 5
3. Get A = 6
4. Update A = 6
5. Get A = 6

The result obtained in the third row is obviously incorrect; it should return A = 5. The sample data looks like:

id         value  time        operation
124945912  FALSE  1718280482  get
124945938  FALSE  1718280373  get
124945938  FALSE  1718280373  update
124945938  null   1718280363  get
124945937  FALSE  1718280348  get
124945937  FALSE  1718280348  update
124945937  null   1718280337  get
124945936  FALSE  1718280330  get
124945936  FALSE  1718280330  update

Both id=124945937 and id=124945936 are correct, since the value obtained after the Update operation matches the Update value (FALSE), even though the value obtained before the Update (null) does not. A Get can be ignored if there is no Update operation before it. Can anyone help? Thanks in advance^^
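One way to express this in SPL is to carry the most recent update value forward with streamstats and compare it on each get; a sketch assuming the fields are named id, value, time, and operation as in the sample:

    | sort 0 id time
    | eval updated_value=if(operation="update", value, null())
    | streamstats last(updated_value) as expected_value by id
    | where operation="get" AND isnotnull(expected_value) AND value!=expected_value

The final where clause keeps only the "bad" gets: those that happened after at least one update but do not match the latest updated value.
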
Hi there, I am trying to get some data from MS Defender into a Splunk query. My original KQL query in Azure contains "| join kind=inner" to combine the DeviceProcess and DeviceRegistry tables. The Splunk app I am using: https://splunkbase.splunk.com/app/5518. So basically, I'd like to do the same join between DeviceProcess and DeviceRegistry events in an advanced hunting query (| advhunt) in Splunk SPL. Is there a suitable Splunk query for this kind of purpose?
Hi, I am getting a log feed from a transactional system. Each log entry has a status of either End, Begin, or something in between (but for this I don't care about the in-between), and a UUID to mark that entries belong to the same transaction. I am struggling to write a search query that subtracts the _time of the BEGIN entry with UUID123 from the _time of the END entry with the same UUID. Obviously, my goal is to get the time it took the transaction to complete, but I am not sure how to compare fields in two entries that share the same UUID. Any ideas? Thanks
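A common pattern is to fold both entries into one row per UUID with stats and take the difference; a sketch, with the index, the exact status values, and the UUID field name as assumptions:

    index=your_index (status="Begin" OR status="End")
    | eval begin_time=if(status="Begin", _time, null()), end_time=if(status="End", _time, null())
    | stats min(begin_time) as begin_time, max(end_time) as end_time by UUID
    | where isnotnull(begin_time) AND isnotnull(end_time)
    | eval duration_sec=end_time - begin_time
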
The following query yields no results:

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| where eventStartsFrom <= now() and eventEndsAt >= now()

but

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| where eventStartsFrom <= now()

and

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| where eventEndsAt >= now()

both work individually. All comparisons are made against epoch date format. Can someone help me understand what mistake I am making here?
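Two usual suspects: the two fields never occur in the same event (so no single event can satisfy both conditions), or one of them is a string so the comparison is not numeric. A sketch that rules out both, using only the fields already named:

    index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
    | eval eventStartsFrom=tonumber(eventStartsFrom), eventEndsAt=tonumber(eventEndsAt)
    | where isnotnull(eventStartsFrom) AND isnotnull(eventEndsAt)
    | where eventStartsFrom <= now() AND eventEndsAt >= now()
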
The purpose of this question is to create legacy diagrams of how the search head works in Splunk. I want to understand the internal flow of the search head so anyone can use it in a future LLD or flow diagram.
Hi Team, I am trying to chart the transaction conversion rate for each day of the week as a line chart of successful transactions for multiple merchants, something like the chart shown below. My query is like this:

| Myquery
| stats sum(Attempts) as TransactionAttempts, sum(Success) as SuccessfulTransactions by MerchantName
| eval CR= round(coalesce( SuccessfulTransactions / TransactionAttempts * 100, 0 ), 2)
| timechart span=1d CR by MerchantName

Which function should I put in the timechart to get the desired result?
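The stats step drops _time, so the trailing timechart has nothing left to bucket by day. A sketch that keeps the day in the group-by and pivots at the end (field names taken from the query above):

    | Myquery
    | bin _time span=1d
    | stats sum(Attempts) as TransactionAttempts, sum(Success) as SuccessfulTransactions by _time, MerchantName
    | eval CR=round(coalesce(SuccessfulTransactions / TransactionAttempts * 100, 0), 2)
    | xyseries _time MerchantName CR
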
I am using a web tool, scconverter.net, to download and save SoundCloud tracks for offline listening. I want to ensure that the tool operates efficiently and without errors. How can I set up Splunk to track the usage, performance metrics, and any potential issues with this web tool? Specifically, I am interested in:

- Monitoring the number of downloads per day.
- Tracking error rates and response times.
- Setting up alerts for any performance degradation.

What data should I collect, and how can I visualize it in Splunk? Any advice on configuring the necessary inputs and dashboards would be appreciated.
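Assuming you control the web server behind the tool and onboard its access logs (the index, sourcetype, and field names below are all assumptions that depend on how the logs are extracted), a minimal sketch of the kind of panel searches involved:

    index=web sourcetype=access_combined uri_path="*download*"
    | timechart span=1d count as downloads

    index=web sourcetype=access_combined
    | timechart span=1h avg(req_time) as avg_response, count(eval(status>=500)) as server_errors
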