All Topics

Hello everyone,

I have been working on a Chrome/Edge extension that adds some enhancements to the Splunk SPL search box. Initially, my goal was to add the ability to toggle comments on and off using Ctrl+/, which is common in most code editors, but I also added other features, such as cutting lines with Ctrl+X or toggling line numbers with Ctrl+L. You can check out the extension on GitHub, where I provide detailed installation instructions.

I am looking forward to any feedback, questions, or suggestions.

Thanks,
Julio
Hey everyone, I'd like some help with an issue I am facing resetting a Splunk Enterprise password on Linux. I tried making a change to server.conf in the local directory, /opt/splunk/etc/system/local/server.conf.

Here is the current directory:

```
┌──(root㉿kali)-[/opt/splunk/etc/system/local]
└─# ls
deploymentclient.conf   migration.conf   README   server.conf   web.conf
```

And the contents of server.conf:

```
[sslConfig]
sslPassword =

[general]
pass4SymmKey =

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
```

As shown above, I tried removing the SHA-256 hash values under `pass4SymmKey =` and `sslPassword =`, but after restarting the server those fields now remain blank.

Following some advice, I stopped the server and then removed and deleted server.conf:

```
$ ./splunk stop
```

After this I tried restarting the server with the command below, but the issue is that it does not prompt me to create new credentials:

```
┌──(root㉿kali)-[/opt/splunk/bin]
└─# ./splunk start

Splunk> All batbelt. No tights.

Checking prerequisites...
    Checking http port [8000]: open
    Checking mgmt port [8080]: open
    Checking appserver port [127.0.0.1:8065]: open
    Checking kvstore port [8191]: open
    Checking configuration... Done.
    Checking critical directories... Done
    Checking indexes...
        Validated: _audit _configtracker _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main summary
    Done
    Checking filesystem compatibility... Done
    Checking conf files for problems...
    Invalid key in stanza [instrumentation.usage.tlsBestPractices] in /opt/splunk/etc/apps/splunk_instrumentation/default/savedsearches.conf, line 451: | append [| rest /services/configs/conf-pythonSslClientConfig | eval sslVerifyServerCert (value: if(isnull(sslVerifyServerCert),"unset",sslVerifyServerCert), splunk_server=sha256(splunk_server) | stats values(eai:acl.app) as python_configuredApp values(sslVerifyServerCert) as python_sslVerifyServerCert by splunk_server | eval python_configuredSystem=if(python_configuredApp="system","true","false") | fields python_sslVerifyServerCert, splunk_server, python_configuredSystem] | append [| rest /services/configs/conf-web/settings | eval mgmtHostPort=if(isnull(mgmtHostPort),"unset",mgmtHostPort), splunk_server=sha256(splunk_server) | stats values(eai:acl.app) as fwdrMgmtHostPort_configuredApp values(mgmtHostPort) as fwdr_mgmtHostPort by splunk_server | eval fwdrMgmtHostPort_configuredSystem=if(fwdrMgmtHostPort_configuredApp="system","true","false") | fields fwdrMgmtHostPort_sslVerifyServerCert, splunk_server, fwdrMgmtHostPort_configuredSystem] | append [| rest /services/configs/conf-server/sslConfig | eval cliVerifyServerName=if(isnull(cliVerifyServerName),"feature",cliVerifyServerName), splunk_server=sha256(splunk_server) | stats values(cliVerifyServerName) as servername_cliVerifyServerName values(eai:acl.app) as servername_configuredApp by splunk_server | eval cli_configuredSystem=if(cli_configuredApp="system","true","false") | fields cli_sslVerifyServerCert, splunk_server, cli_configuredSystem] | stats values(*) as * by splunk_server | eval date=now() | makejson output=data | eval _time=date, date=strftime(date,"%Y-%m-%d") | fields data date _time).
    Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
    Done
    Checking default conf files for edits...
    Validating installed files against hashes from '/opt/splunk/splunk-9.0.3-dd0128b1f8cd-linux-2.6-x86_64-manifest'
    All installed files intact.
    Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Enter PEM pass phrase:
Done

Waiting for web server at http://127.0.0.1:webport to be available... Done

If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com

The Splunk web interface is at http://kali:webport
```

Can someone help me change the password? I have the Splunk forwarder installed on both machines, the Windows host as well as the Linux machine, but I would like to ingest data from my Linux machine. This happened recently when I forgot the Splunk Enterprise password on the VMNET 1 Linux machine (192.168.0.0/24, http://localhost:webport). Windows is working fine at the local address 127.0.0.1:webport.

Thanks for all the help in advance.
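For reference, the commonly documented way to reset a lost admin password on recent Splunk versions is the user-seed.conf mechanism rather than editing server.conf (pass4SymmKey and sslPassword protect different secrets and are unrelated to the web login). A rough sketch of the procedure, assuming $SPLUNK_HOME is /opt/splunk; verify the details against the official docs for your version:

```
# Stop Splunk first
/opt/splunk/bin/splunk stop

# Move the existing credential store aside so the admin account gets re-seeded
mv /opt/splunk/etc/passwd /opt/splunk/etc/passwd.bak

# Create /opt/splunk/etc/system/local/user-seed.conf containing:
#   [user_info]
#   USERNAME = admin
#   PASSWORD = <your new password>

# Restart; Splunk consumes user-seed.conf on startup and recreates the admin user
/opt/splunk/bin/splunk start
```

Restoring the hashes you removed from server.conf (from a backup, if you have one) is advisable before trying this, since blanking pass4SymmKey and sslPassword can cause separate problems.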
Hello! Can I ask something very basic, as it will help me get started quickly? How can I structure a query to:
1) group records by [Field1]
2) calculate the max and min [Date] for each group of the above (i.e. each unique value of [Field1])
3) calculate the difference between the max and min [Date] from above
Thanks!
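A minimal SPL sketch of those three steps, assuming the fields are literally named Field1 and Date, and that Date parses with the strptime format shown (adjust the index and the format string to your data):

```
index=your_index
| eval date_epoch=strptime(Date, "%Y-%m-%d %H:%M:%S")
| stats min(date_epoch) AS min_date max(date_epoch) AS max_date BY Field1
| eval diff_seconds=max_date - min_date
```

The `stats ... BY Field1` does the grouping; the final `eval` gives the difference in seconds, which `tostring(diff_seconds, "duration")` can render in a readable HH:MM:SS form.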
Hello, I am trying to download/extract a query result and I get the following error:

```
socket.timeout: The read operation timed out
```

Any idea? If downloading it via a web browser is not an option due to size, is there any other alternative? Thanks!
Hello! I am a user and I have access to https://myorg.splunkcloud.com/en-US/app/myapp/search

I would like to see:
1) what fields and tables I can query (i.e. have access to)
2) what data modelling exists (how tables relate and are joined)
3) some unique values of some of these fields

If I could run SQL, for example, that would be great! Otherwise, what is the proper way? My goal is to run a query that returns the values of some fields with some filters applied, the typical stuff!

Thanks!
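A few built-in commands can help with that kind of exploration; a hedged sketch (run each search separately, and note that the results are limited by your role's permissions — the index and sourcetype names below are placeholders):

```
| eventcount summarize=false index=*

| datamodel

index=your_index sourcetype=your_sourcetype
| fieldsummary maxvals=10
```

`eventcount` lists the indexes you can search, `| datamodel` lists the data models (the closest Splunk analogue to table relationships), and `fieldsummary` reports the fields and a sample of distinct values for whatever events the preceding search returns.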
Hi,

Could you please help me list out the service requests to Splunk by user? I'm trying to upload them to the ticketing tool.

Type: onboarding
service: operational
desc: appliances

Thanks.
I registered for a Splunkwork+ account with my .edu email for my university, which is on the list for Splunkwork+. I received the verification email, but the link had expired after only a few minutes. Pasting it into a web browser got the same result. I can't get the Splunkwork+ site to resend the verification; I just keep getting the About page for college students.
Hello everyone,

How are you all doing?

I have a dashboard ready, but I'm having trouble placing the drilldowns. The case is as follows: each index (for example windows, linux, storage) would have to open a drilldown with the "problem" word. There are 68 problem words and 60 indexes.

Do you have any idea?

Thank you very much!
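If the dashboard is Simple XML, a generic sketch of a table drilldown that captures the clicked index into a token (all names here are illustrative, not taken from your dashboard):

```
<table>
  <search>
    <query>| tstats count where index=* by index</query>
  </search>
  <drilldown>
    <set token="selected_index">$row.index$</set>
  </drilldown>
</table>
```

Other panels can then reference $selected_index$ in their own searches, which avoids hand-writing a separate drilldown per index/problem-word combination.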
Hey all, I need some assistance tuning an out-of-the-box Splunk detection rule. The Volume Shadow Copy service frequently enters the running/stopped state by itself. I want to compare the lastTimeStamp of the running/stopped states of a unique service. Ideally, if the difference is more than one hour, a field stoppedForMoreThanAnHour should equal True. How can I achieve this?
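One hedged sketch of the comparison, assuming each event carries the service name, a state field with values running/stopped, and that _time reflects lastTimeStamp (the index and field names are guesses; substitute your own):

```
index=wineventlog service_name="Volume Shadow Copy"
| stats latest(eval(if(state="stopped", _time, null()))) AS lastStopped
        latest(eval(if(state="running", _time, null()))) AS lastRunning
        BY service_name
| eval stoppedForMoreThanAnHour=if(lastStopped > coalesce(lastRunning, 0)
        AND now() - lastStopped > 3600, "True", "False")
```

This reads "more than one hour" as: the latest stopped event is newer than the latest running event and is itself over an hour old. If you meant the gap between the two timestamps instead, compare `lastStopped - lastRunning > 3600`.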
Hello,

Can Splunk monitor Microsoft Office 365 services like Power Automate, Power BI, PowerApps, Planner, etc.? I see that SharePoint Online (one of the Office 365 services) is monitored and I am able to view details. Please advise whether we need to configure anything, install an add-on, or buy an additional license to monitor Power BI, Power Automate, or Power Apps.

Regards,
Shantha Kumar
I am fairly new to Splunk, so I may not have the correct terms for everything. I am currently working in a distributed Splunk Enterprise environment with Windows and Linux hosts. These hosts are sending logs via UFs to the clustered indexers. There is also an HF that is receiving logs from apps and AWS. My issue is that the logs coming from my UFs are not being parsed into field name-value pairs. The Windows/Linux hosts, indexers, and search heads all have the splunk_TA_nix and splunk_TA_windows add-ons installed.

I almost feel like my indexers are not parsing the data that is coming in. Log data is getting into Splunk and I can see my events, however it is all in a format similar to this (very crude, I know):

```
<data><data><data>1039<data><data><data>time<data><data>program<data>splunk<data>
```

I would like it to be in field name-value pairs. At some point I was receiving logs in this format, but I no longer am. What could be causing this?

```
time: 10:39
program: splunk
```
Hi, I'm logged in as the root user. How do I log in to an individual account in Linux? Below is the error message:

Please login with individual user account, direct root account login sessions will be recorded and audited.

Ciao
The value of the CommandLine field is getting truncated. I am using an index search:

```
index=* source="process"
| table host CommandLine
```

The value is truncated in the CommandLine column of the table result. For example, the input field is:

```
CommandLine="-propertyfile=D:/projects/Testing/properties/perf "-Dtest_jvm_id=002 col 1" -Dbootstrap.folder=D:/projects/Testing/properties"
```

After the search, the result is:

```
CommandLine="-propertyfile=D:/projects/Testing/properties/perf "
```

I also need to remove the double quotes from the field, like this:

```
-propertyfile=D:/projects/Testing/properties/perf -Dtest_jvm_id=002 col 1-Dbootstrap.folder=D:/projects/Testing/properties
```
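If the full value is present in _raw and only the automatic key-value extraction is stopping at the first embedded quote, one hedged workaround is to re-extract the field yourself and then strip the quotes with replace(). The rex below is a guess at the raw format (it assumes CommandLine runs to the last double quote of the event) and will need tuning against your actual data:

```
index=* source="process"
| rex field=_raw "CommandLine=\"(?<CommandLine>.+)\""
| eval CommandLine=replace(CommandLine, "\"", "")
| table host CommandLine
```

The greedy `.+` captures up to the final quote, and the `eval replace(...)` then removes any remaining embedded double quotes, which matches the output format you described.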
We have ingested logs from our application into Splunk. These logs include two keys, stageType and correlationId, along with other keys. I have to find the list of correlation ids that are returned for one stageType and not for the other stageType.

I realise Splunk queries cannot be written like SQL. I am not very conversant with Splunk; I normally just get by using simpler queries. I am hoping someone can help me with a query that gives me the list, so I can do further analysis to find the reason for the differences, which should not normally exist. Is it possible to do this in Splunk? Can someone help me with the query? This is what I have so far:

```
index=grp-applications sourcetype="kafka:status" stageType IN ("STAGEA", "STAGEB") env=qa
| dedup env, correlationId, stageType
| stats count by env, correlationId, stageType
```

Thank you
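The "present for one value but not the other" pattern is commonly done by counting distinct stage types per correlation id; a sketch building directly on the search above:

```
index=grp-applications sourcetype="kafka:status" stageType IN ("STAGEA", "STAGEB") env=qa
| stats dc(stageType) AS stage_count values(stageType) AS stages BY correlationId
| where stage_count=1
```

This keeps only the correlation ids that were seen for exactly one of the two stage types; appending `| where stages="STAGEA"` would narrow it to ids that appeared for STAGEA and never for STAGEB (or vice versa).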
I have an application that has some instances/hosts. Because of changes in throughput, or instability, new instances/hosts can be started and old ones can be terminated. There are many different events/logs being registered.

When a new instance/host is started, it logs the following event:

```
1/20/23 6:00:01.256 PM
[app=gateway-example-app, traceId=, spanId=, INFO 1 [main] gateway.GatewayApplicationKt : Started GatewayApplicationKt in 21.081 seconds (JVM running for 48.641)
host = ip-example-of-ip-01   source = http:source-example   sourcetype = example-sourcetype
```

When an instance is terminated, it logs the following:

```
1/20/23 3:53:42.778 PM
CoreServiceImpl INFO: JVM is shutting down
host = ip-example-of-ip-02   source = http:source-example   sourcetype = example-sourcetype
```

Is there a way of getting a list of hosts that have the initialization log but not the termination log? In other words, a list of currently active hosts?

Thank you for any help in advance, and sorry if I wrote anything wrong; English is not my first language.
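A common sketch for "started but never stopped": classify each event, take the latest per host, and keep the hosts whose most recent event is the start message. The match strings come from the two examples above; the index name is a placeholder:

```
index=your_index source="http:source-example"
    ("Started GatewayApplicationKt" OR "JVM is shutting down")
| eval event_type=if(searchmatch("JVM is shutting down"), "stop", "start")
| stats latest(event_type) AS last_event BY host
| where last_event="start"
```

Since `latest()` is driven by _time, a host whose newest matching event is the startup message is treated as still active; hosts whose newest event is the shutdown message are filtered out.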
I have two indexes holding the status of batch jobs that run in our system daily.

Source 1: Contains JobName, StartTime, EndTime, Status. The job status can be Active, Completed, or Failed. The source (the log name itself) is the job name here. A new event is created each time the Status changes in the same source. This source contains up-to-date information for all jobs except those that are bypassed.

Source 2: This is a DB source containing these fields: JobName, BypassFlag, AvgRunTime. This source contains the AvgRunTime of all jobs, and a BypassFlag which tells whether a job was bypassed today.

Need: I am trying to get each JobName (Source 1 and Source 2), StartTime (Source 1), EndTime (Source 1), JobStatus (Source 1), whether the job was bypassed today (Source 2), and the AvgRunTime of the job.

Query I tried using an outer join: I tried using both indexes in the same query and also joins, but with the outer join I am getting results only from the first index, even though I am doing an outer join. Please help.

```
(index="index2" sourcetype=db)
| table JobName,BypassFlag,AvgRunTime
| join type=outer JobName
    [search index="index1" host=tnsm123*
    | stats latest(JobName), latest(Status), latest(StartTime), latest(EndTime) by source
    | table JobName, Status, StartTime, EndTime by source]
| table, StartTime, EndTime, Status, AvgRunTime, BypassFlag
```
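As an alternative to `join` (which has subsearch limits and left/right semantics that often surprise), a hedged sketch that reads both sources in one search and merges per job with `stats`, using the field names from the post above:

```
(index="index2" sourcetype=db) OR (index="index1" host=tnsm123*)
| stats latest(StartTime) AS StartTime latest(EndTime) AS EndTime latest(Status) AS Status
        latest(BypassFlag) AS BypassFlag latest(AvgRunTime) AS AvgRunTime BY JobName
| table JobName StartTime EndTime Status BypassFlag AvgRunTime
```

This assumes both sources expose a common JobName field; if in index1 the job name only exists as the source, derive it first with something like `| eval JobName=coalesce(JobName, source)` before the stats.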
Hello,

We have migrated from an app called Mirth to Splunk. With Mirth we used a tool called Interface Explorer for HL7 to view messages more cleanly. Is there a tool for Splunk to view messages in a cleaner format?

Thank you,
Jean
I am trying to build a dashboard panel that shows different colors based on three different counters for an API request. The single panel should show:
- green if the count of timeouts < 5, the count of errors < 10, and the response time < 500 ms
- yellow if the count of timeouts < 10, the count of errors < 20, and the response time < 1000 ms
- red for any other values
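One hedged way to express that logic is to compute a numeric status with `eval case()` and map it to named ranges with `rangemap`; the base search and field names below are placeholders for whatever produces your three counters:

```
index=api_logs
| stats count(eval(event_type="timeout")) AS timeouts
        count(eval(event_type="error")) AS errors
        avg(response_time_ms) AS response_ms
| eval status=case(timeouts<5 AND errors<10 AND response_ms<500, 0,
                   timeouts<10 AND errors<20 AND response_ms<1000, 1,
                   true(), 2)
| rangemap field=status low=0-0 elevated=1-1 default=severe
```

In a classic single-value panel, the range names low/elevated/severe conventionally render as green/yellow/red; in Dashboard Studio or a Simple XML single value you can instead configure color ranges directly on the status field.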
These two pieces of SPL return two different-looking tables.

```
index=servicenow sourcetype=incident number=INC5181781
| spath opened_at
| spath resolved_at
| table number, opened_at, resolved_at, number, _time
```

gives me different results vs.

```
index=servicenow sourcetype=incident number=INC5181781
| table number, opened_at, resolved_at, number, _time
```

In the one with spath, the table has more values for opened_at and resolved_at. The same number of events is found, but the table makes it look like one event is "missing" dimensions. Even if I run these two searches and compare the "Selected Fields" section on the left-hand side, the one with spath has more events that have the values.

In the props.conf file the source has the line INDEXED_EXTRACTIONS = json. This may also be impacting my ability to search. It seems like I will not get complete results unless I do something like:

```
sourcetype=incident
| spath number
| spath category
| search number=INC5181781 category=Closed
```

I assume something is not configured as I expect it to be, and I am unsure where else to check.
Hello All,

I just upgraded our Tenable App for Splunk to 6.0.3 to match our Tenable Add-On for Splunk, which has been on 6.0.3. When I upgraded the Tenable App for Splunk, I started to get an error when I launched it: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details."

Is this a bug in the current version or something I need to change on our end? I'm not sure what's going on, but when I open the developer console the only error I get is this:

```
TypeError: Cannot read properties of undefined (reading 'get')
    at setupMultiInput (eval at <anonymous> (dashboard_1.1.js:277:107834), <anonymous>:14:42)
    at eval (eval at <anonymous> (dashboard_1.1.js:277:107834), <anonymous>:33:13)
    at Object.execCb (eval at module.exports (common.js:1:1), <anonymous>:1658:33)
    at Module.check (eval at module.exports (common.js:1:1), <anonymous>:869:55)
    at Module.enable (eval at module.exports (common.js:1:1), <anonymous>:1151:22)
    at Module.init (eval at module.exports (common.js:1:1), <anonymous>:782:26)
    at eval (eval at module.exports (common.js:1:1), <anonymous>:1424:36
```

Thanks for your time!