All Topics

I've searched and searched on this, and it's time to just ask the question. I want to create a simple transaction count bar chart overlaid with an average duration line. I can do this when hard-coding the transaction name. However, when I try to generate the overlay using a dropdown of transaction names, there is no overlay. I feel like I'm just not passing a correctly formatted token to the charting.chart.overlayFields option. I've tried:

<option name="charting.chart.overlayFields">"avg(duration):" $ID$</option>
<option name="charting.chart.overlayFields">$ID$</option>
<option name="charting.chart.overlayFields">avg(duration): $ID$</option>

and probably dozens more, as I'm throwing darts. Any ideas here?
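One workaround that sidesteps the token-formatting problem entirely is to give the overlay series a fixed name in the search itself, so charting.chart.overlayFields never needs a token at all. An untested sketch, assuming a transaction_name field and a $ID$ dropdown token (both names illustrative):

<search>
  <query>
    index=myindex transaction_name="$ID$"
    | timechart count, avg(duration) as avg_duration
  </query>
</search>
<option name="charting.chart.overlayFields">avg_duration</option>

Because the series is always called avg_duration regardless of which transaction the dropdown selects, the overlay option can stay static.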
Hi Splunkers, I am developing a custom app and I am using React to build a user interface for it. I have tested it locally and it works well, but when I copy all the files to the /opt/splunk/etc/apps/flowdock/default/data/ui/alerts directory, I get an error like the one in the picture below. How can we solve this issue? Thanks in advance.
Hello, I'm writing a simple dashboard with a time picker and some panels. I'm trying to display the from/to time selected by the user in the panel header. It works if the user selects a Date/Time range, but for a relative time period (e.g. last 1 day, last 15 minutes), the earliest and latest times are non-numeric values like -d@d, now, etc. Is it possible to get the search start/end time for the relative time period cases? Thanks a lot. Regards /ST Wong
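One way to resolve relative modifiers like -1d@d into concrete times is the addinfo command, which attaches the search's actual time bounds as the epoch fields info_min_time and info_max_time. A sketch (the panel wiring around it is an assumption):

| makeresults
| addinfo
| eval from=strftime(info_min_time, "%Y-%m-%d %H:%M:%S"),
       to=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| table from to

Run with the dashboard's time token applied, this yields the resolved start/end times, which can then be set into tokens (e.g. in the search's <done> handler) for use in a panel title.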
I have a Python custom search command that generates a dendrogram plot file, i.e. dendrogram.png. The custom command writes the image file to the app's /appserver/static/images folder, and a dashboard displays dendrogram.png so an analyst can make changes to the custom search command's arguments. When the custom search command is run again, the image file, dendrogram.png, is overwritten in the app's static images folder. Is there a way to have the updated image display, in a production environment, without restarting Splunk and reloading the page?
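Caching is often the culprit here; a common trick is cache-busting the image URL with a changing query string so each rerun fetches a fresh copy. An untested sketch, assuming a $cb$ token that the dashboard sets to something unique (e.g. the current epoch time) when the generating search finishes:

<html>
  <img src="/static/app/my_app/images/dendrogram.png?cb=$cb$"/>
</html>

The ?cb= parameter is ignored when serving the static file but defeats the browser cache. Note that splunkweb itself may still cache static assets until its asset version is bumped via the /_bump endpoint (https://<host>:<port>/<locale>/_bump), so both layers may need attention.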
If I try to search phantom container events by label, status or several other fields, I don't see events relating to containers created by the email poll-based ingestion feature of Phantom. Why don't they show up?
Hi, currently I am using the sendresults command to send email, but in the body of the email I want to add a few lines, with a new line between each. When I use \r\n it doesn't work: all the lines come through merged in email_body. Can you please help? Thanks.
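sendresults is a third-party command, so this is an assumption about its behavior, but email bodies built in Splunk are frequently rendered as HTML, where literal \r\n collapses while HTML line breaks survive. A sketch:

| eval email_body="line one" . "<br>" . "line two" . "<br>" . "line three"

If the message is instead sent as plain text, building the value with a real newline character (for example via eval's printf function, | eval email_body=printf("line one%sline two", "\n")) may work; which variant applies depends on how sendresults constructs the message.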
Hi All, I am a newbie to Splunk Enterprise Security, and I am currently trying my hand at Splunk ES to explore the SIEM areas. I have a couple of questions on the basic working principles of CIM and correlation searches.

I understand that once we normalize the incoming data to be CIM-compatible, Splunk automatically links it with the relevant datamodel based on tags; for example, the Malware_Attacks datamodel links the incoming data (indexed and normalized data available in an index named test) via the tags malware and attack. Datamodel acceleration is also enabled, so for correlation searches the tstats command looks into the tsidx files to get the search results.

My question is how Splunk scans multiple indexes. In my case the data is available in the test index, but there may also be indexes called test1, test2, etc., all of which hold CIM-compatible data for Malware. Is the data from each of these indexes compiled into one set of tsidx files per datamodel, or does Splunk use a different technique to scan each of the indexes for the result? Please correct me if any of my understanding above is incorrect. Your help is appreciated!!! Thanks, Aashiq
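Acceleration summaries are built per index bucket, but tstats queries them through the datamodel, so every constituent index is covered in a single search. A quick way to see this for yourself is to split the accelerated results by index (datamodel and dataset names here follow the CIM Malware model; adjust to your environment):

| tstats summariesonly=true count from datamodel=Malware.Malware_Attacks by index

If test, test1, and test2 all produce events that match the datamodel's constraints (and fall within the indexes the accelerating search covers), each should appear here with its own count.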
Hi, how do I perform a field extraction on a field from a lookup table? I'm trying to add another field so the data model in Splunk Enterprise Security can recognise it. The issue I'm having is that field extractions in props.conf and transforms.conf happen before the lookup table is applied. I tried the AS clause after OUTPUT on the lookup, but it renames the default field from the Windows Add-on; I only want to add another field, not rename the fields in the Add-on. REPORT- in props.conf and transforms.conf works on any field except fields from lookup tables. I need to perform the field extraction in the Add-on and not in SPL. Thanks in advance.
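An automatic lookup defined with a LOOKUP- stanza (rather than REPORT-) runs after search-time field extraction, and OUTPUTNEW adds fields without overwriting or renaming existing ones. An untested sketch, with the stanza, lookup, and field names as placeholders:

transforms.conf:
[my_lookup]
filename = my_lookup.csv

props.conf:
[your:sourcetype]
LOOKUP-add_field = my_lookup existing_field OUTPUTNEW new_field

Since this lives in .conf files inside the Add-on (or a local overlay of it), the new field is available to the data model without touching SPL, and the Add-on's own fields are left untouched.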
Hello everyone, please help me with my problem. In a non-production environment I inherited, there was a licence that expired this week, with multiple users declared in the environment. So I asked for a dev licence in my name, but the new licence is way too generous on indexing capacity (I definitely do not need 50 GB per day), and switching licences deleted all my existing users except admin. Do you have any suggestions? It doesn't seem you can ask for any options in a dev licence... Thanks in advance. Regards, emallinger
I've switched my license from the trial to Free and, in doing so, the users I created are no longer there, which has created an orphaned embed. It tells me to reassign it to a valid user, but when I try to reassign it, it tells me I have to unembed it first; embedded reports are also not a feature of Splunk Free, so it appears I cannot do that either. I've tried to delete the job, but it just reappears. Here is the notification that is always on: "Splunk has found 1 orphaned searches owned by 1 unique disabled users. Click to view the orphaned scheduled searches. Reassign them to a valid user to re-enable or alternatively disable the searches." How can I remove this embed so I can reassign or delete the job, thereby stopping the error message?
Hi, I tried to use the 'appNameStrategy' options as described in the doc https://docs.appdynamics.com/display/PRO45/Enable+Auto-Instrumentation+of+Java+Applications as part of enabling auto-instrumentation using the cluster agent in a k8s cluster (a PKS cluster), but I could not get it to work. I am getting the error below:

[WARNING]: 2020-07-29 01:05:47 - deploymenthandler.go:245 - Cannot start instrumentation. Error getting Application and Tier names for deployment demo-app

instrumentationMethod: Env
nsToInstrumentRegex: demo-ns
appNameStrategy: label
appNameLabel: app
defaultEnv: JAVA_OPTS
resourcesToInstrument: [Deployment,StatefulSet]
imageInfo:
  java:
    image: "dev.registry.ews.int/vendored/docker.io/appdynamics/java-agent:20.6.0"
    agentMountPath: /opt/appdynamics
netvizInfo:
  bciEnabled: true
  port: 3892

The error is the same for 'appNameStrategy: namespace'. Can someone guide me if I am missing something here?
I'm just starting out with Splunk and have a few CSVs that I'm trying to import. The main one contains library records from the past decade. It originally contained both in and out times, but I've broken it down to be as simple as possible to start with and given it just the initial check-out time. The times are all written in the same manner. There are about 134,000 entries total.

game_id,attendee_id,event_id,check_out_time
328,7199,5,2010-09-05 01:32:57
228,7241,5,2010-09-05 01:33:13
379,7327,5,2010-09-05 01:33:51
96,6729,5,2010-09-05 01:34:37

When I go to import this data, the timestamp is always wrong. I've read through numerous forum posts on Splunk and other websites and had multiple people double-check the CSV; I cannot identify why it struggles to pull the correct timestamp for these events. Any help will be greatly appreciated.
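With CSV input, the sourcetype usually needs to be told explicitly which column carries the timestamp and in what format; otherwise Splunk guesses. A decade-old dataset also runs into the default cutoff for how far in the past timestamps may be, so that limit likely needs raising too. A sketch of a props.conf stanza (the sourcetype name is a placeholder):

[library_csv]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = check_out_time
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_DAYS_AGO = 3651

MAX_DAYS_AGO matters here because events older than the default window are otherwise given a substitute timestamp, which matches the "always wrong" symptom for 2010-era records.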
For the past few days, after upgrading the infrastructure from 7.3.2 to the latest GA (8.0.5), I've been having problems when running ad-hoc searches on an SHC. To give you more context, the Splunk infrastructure I'm talking about is described at the end of the post. The problem I'm facing is the following: when I connect to the SHC using the VIP and run any search at all, the system raises the following error after 5-10 seconds, and I couldn't find any relevant information in the logs. When I connect directly to any of the Search Heads and run the same search, it runs smoothly without any problem. I found the following Known Issues (SPL-192057, SPL-188608) that seem to match this behavior. These are pretty recent though, and I can't find which Splunk versions are affected. Did anyone face this before? What do you think I should do?

Splunk Infrastructure:
- 3 Search Heads, in a Search Head Cluster (SHC) configured to distribute searches to both Indexers
- Load balancer in front of the SHC
- 2 Indexers
- 2 Heavy Forwarders + multiple Universal Forwarders
- 1 Deployment Server
- 1 Cluster Master
For example, suppose we have several events and there is a field named from, which only exists in the first event. Is it possible to append this value to the other events? I'd like to save it as a temporary value and then use it later. I tried eval temp=from, but I cannot use temp in later events. Thanks in advance!
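eval operates on one event at a time, which is why temp doesn't carry over; commands that look across events do. Two common options, sketched (note that "first" here means first in search order, typically the most recent event, so the direction may need checking against your data):

| eventstats first(from) as temp

copies the first non-null value of from onto every event as temp, while

| filldown from

propagates the last non-null from down to the subsequent events that lack it.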
We are setting up Splunk for our application and need to load historical logs, but the timestamps for many of the events are taken as current/wrong dates. We cannot make changes to our conf files as they are shared among multiple projects. Is there an alternative way to set the '_time' values to a value in the log file, without changing the shared conf files?
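_time cannot be rewritten after indexing, so the fix has to happen at index time; but if the shared conf files can't change, one route is to define a new sourcetype in a separate app of your own and point the historical load at it, leaving the shared configuration untouched. A sketch (app, sourcetype, and format strings are placeholders for whatever the log lines actually contain):

$SPLUNK_HOME/etc/apps/my_backfill_app/local/props.conf:
[myapp_historical]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_DAYS_AGO = 3651

Then ingest with that sourcetype, e.g. splunk add oneshot /path/to/old.log -sourcetype myapp_historical, so only the historical load uses the new timestamp rules.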
I have a question on the use of eval on a UA string. I want to do a lookup on a UA string and call out the version of Chrome it contains. At the moment I have covered most UA strings; however, I would like to display only part of the UA string so I can table it into a stats count.

Current UA string: Mozilla/5.0+(Windows+NT+6.3;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/80.0.3987.149+Safari/537.36

At the moment I just have everything other than IE listed as "Other"; however, I'd like to list that as the Chrome browser version. This is my current search:

| eval Browser=case(
    like(cs_User_Agent,"%;+MSIE+8.0;%"), "Internet Explorer 8",
    like(cs_User_Agent,"%+MSIE+7.0%"), "Internet Explorer 7",
    like(cs_User_Agent,"%;+MSIE+9.0;%"), "Internet Explorer 9",
    like(cs_User_Agent,"%;+MSIE+10.0;%"), "Internet Explorer 10",
    like(cs_User_Agent,"%;+rv:11.0%"), "Internet Explorer 11",
    like(cs_User_Agent,"%;+Trident/7.0;+%"), "Internet Explorer 11",
    1==1, "Other")
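Rather than enumerating every Chrome build in case(), the major version can be pulled out of the UA string with replace() and used directly. An untested sketch against the cs_User_Agent field (it assumes Browser was already set by the case() above, and only rewrites the Chrome rows):

| eval chrome_ver=replace(cs_User_Agent, "^.*Chrome/(\d+)\..*$", "\1")
| eval Browser=if(match(cs_User_Agent, "Chrome/"), "Chrome " . chrome_ver, Browser)
| stats count by Browser

For the sample UA string above, this would label the event "Chrome 80" while leaving the Internet Explorer classifications intact.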
Hi All, I need an SPL query that will return the list of filenames that came in on the latest day.

| eval latest_time = max(strftime(_time,"%Y-%m-%d"))
| stats count by latest_time,filename

But I'm not able to achieve that through the above SPL. E.g.:

latest_time        filename
2020-07-28         filename1.txt
                   filename2.txt
                   filename3.txt
                   filename4.txt
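max() is an aggregation function, valid inside stats-family commands rather than a bare eval, which is why the attempt above returns nothing useful. One way to keep only the latest day's filenames, sketched:

| eval day=strftime(_time, "%Y-%m-%d")
| eventstats max(day) as latest_day
| where day=latest_day
| stats values(filename) as filename by day

eventstats stamps every event with the overall latest day (lexicographic max works because %Y-%m-%d sorts chronologically), the where keeps only that day, and values() collects the distinct filenames.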
Hi, I am attempting to use dbxquery to fetch and transform the results of an extended events session from MSSQL Server 2016 which has been saved to a file. I have three saved searches that follow the same format; two work, and one doesn't and behaves quite peculiarly. When I attempt to run the query below, I get a ChunkedExternProcessor error:

07-28-2020 23:32:01.824 ERROR ChunkedExternProcessor - Failed attempting to parse transport header: ,,,,,,,,,,\r\r
07-28-2020 23:32:01.924 ERROR ChunkedExternProcessor - Error in 'dbxquery' command: Invalid message received from external search command during search, see search.log.

| dbxquery query="SELECT event_data = CONVERT(XML, event_data)
INTO #<TempTableName>
FROM sys.fn_xe_file_target_read_file('<PathToLogFile>/LogFile*',null,null,null)
SELECT name = event_data.value(N'(event/@name)[1]', N'varchar(max)'),
errorNumber = event_data.value(N'(event/data[@name="error_number"]/value)[1]', N'varchar(max)'),
severity = event_data.value(N'(event/data[@name="severity"]/value)[1]', N'varchar(max)'),
message = event_data.value(N'(event/data[@name="message"]/value)[1]', N'varchar(max)'),
hostname = event_data.value(N'(event/action[@name="client_hostname"]/value)[1]', N'varchar(max)'),
username = event_data.value(N'(event/action[@name="username"]/value)[1]', N'varchar(max)'),
[sql] = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'varchar(max)'),
session_id = event_data.value(N'(event/action[@name="session_id"]/value)[1]', N'varchar(max)'),
query_hash = event_data.value(N'(event/action[@name="query_hash"]/value)[1]', N'varchar(max)'),
database_id = event_data.value(N'(event/action[@name="database_id"]/value)[1]', N'varchar(max)'),
client_app_name = event_data.value(N'(event/action[@name="client_app_name"]/value)[1]', N'varchar(max)')
FROM #<TempTableName>" connection="<My connection name>"

I am confused why my other queries, which follow this same format, work, yet this one returns this error.
If I reduce the number of fields to just 'name', the error changes to one of two possibilities:

07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: Traceback (most recent call last):
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery_bridge.py", line 90, in <module>
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: main()
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery_bridge.py", line 86, in main
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: bridge.connect()
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery_bridge.py", line 38, in connect
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: self.read_from_dbxquery_server_write_to_stdout()
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: File "C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\bin\dbxquery_bridge.py", line 74, in read_from_dbxquery_server_write_to_stdout
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: data = recv(1024 * 1024)
07-28-2020 23:36:16.429 ERROR ChunkedExternProcessor - stderr: ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine

or:

07-28-2020 23:37:45.389 ERROR ChunkedExternProcessor - Failed attempting to parse transport header: eported,\r\r
07-28-2020 23:37:45.490 ERROR ChunkedExternProcessor - Error in 'dbxquery' command: Invalid message received from external search command during search, see search.log.

The name field should read "error reported", so the 'eported' implies some information is being missed and the connection is being closed too early, but I am not sure why this would occur. Any ideas? Thanks
Hi folks, I've been banging my head against this for hours and am sure I am missing something obvious. I have tried using eval and eventstats in various iterations, but no dice. Essentially I am trying to have the below alert/search executed (which compares the volume of errors from one minute to the next). The problem we've been running into is that we have a field, SESSID, assigned to unique sessions, and at times one "stuck" session can really muddy up this valuable canary alert for us. My goal is to either exclude all results with an offending SESSID (e.g. where its share of the count exceeded 10%) or just not trigger the alert for that particular minute.

host=app* LOGLEVEL=ERROR earliest=-2m@m latest=-1m@m
| stats count as LastMinute
| join host [search host=app* LOGLEVEL=ERROR earliest=-3m@m latest=-2m@m | bucket _time span=1m | stats count as PrevMinute ]
| eval HigherThanPrevMinute=(3*PrevMinute)
| where LastMinute > HigherThanPrevMinute
| where LastMinute > 1000
| table LastMinute,PrevMinute

Thank you in advance! -Armen
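One way to drop a runaway session before counting is to compute each SESSID's share of the errors with eventstats and filter on it. An untested sketch of the first minute's leg (the 10% threshold is an assumption; the previous-minute leg would get the same treatment):

host=app* LOGLEVEL=ERROR earliest=-2m@m latest=-1m@m
| eventstats count as total
| eventstats count as sess_count by SESSID
| eval sess_pct=100*sess_count/total
| where sess_pct<=10
| stats count as LastMinute

eventstats leaves the individual events intact while annotating each with the overall and per-SESSID counts, so the where clause can exclude any session responsible for more than 10% of the minute's errors before LastMinute is tallied.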
Does anyone have any experience with the VersionControl for Splunk App? Planning to use this for backup/restore of Splunk conf files and knowledge objects (DR strategy) I’m planning to come up with a strategy for backing up and restoring the knowledge objects (i.e. dashboards, reports, alerts, saved searches) and configuration files associated to Splunk ES and the various apps/add-ons that shall be part of the deployment at our organization. This is mainly to ensure that all of our Splunk items are capable of being restored to our Disaster Recovery site in the event that Production experiences prolonged downtime. Our Splunk search heads and Management consoles (Deployment Server, Index Cluster Master) in DR shall be on cold standby and unavailable, unless we need to start them up if a disaster occurs. Would anyone know if the VersionControl for Splunk App ( https://splunkbase.splunk.com/app/4355/#/details ) is any good? As long as I'm able to backup my Splunk conf files and KO's from Production, and restore these to my DR site in the event of a disaster/prolonged Production downtime, then I'm comfortable with leveraging this app as a DR strategy. I'm less concerned about version control since we'll only have 4 people managing our Splunk ES deployment and we won't have thousands of KO's to take care of here.