All Topics

Hi All, I am trying to get AWS GuardDuty data into Splunk. However, the app https://splunkbase.splunk.com/app/3790/ has been archived. Is there any other app we can use, or any other method we can follow, to get the AWS GuardDuty data into Splunk?
I have a couple of questions about migrating the ES standalone search head to a clustered search head. I have tested two different methods; mine works well and the Splunk method doesn't. Here is my method.

My Method:
1. Create tarball backups of /opt/splunk/etc/apps and /opt/splunk/etc/users from the standalone ES search head
2. Create a kvstore backup from the standalone ES search head
3. Build the deployer and search head cluster
4. Set up LDAP
5. Set up licensing
6. Extract users.tar in /opt/splunk/etc/shcluster on the deployer
7. Extract apps.tar in /opt/splunk/etc/shcluster on the deployer
8. Remove all apps that are default install apps
9. Apply the shcluster bundle
10. Restore the kvstore on the captain
11. Restore the kvstore on the other two nodes in the cluster
12. Restart the entire SH cluster
13. Done

The team tested and everything looks like it is there and seems to be working.

Splunk's Method:
1. Create tarball backups of /opt/splunk/etc/apps and /opt/splunk/etc/users from the standalone ES search head
2. Create a kvstore backup from the standalone ES search head
3. Build out the deployer and search head cluster
4. Set up LDAP
5. Set up licensing
6. Install the latest version of ES on the deployer
7. Install all TAs that will be used
8. Deploy ES out to the cluster
9. Restore users.tar in /opt/splunk/etc/shcluster
10. Restore the other apps in /opt/splunk/etc/shcluster (this process is more tedious, as you have to break out all items from the ES app individually)
11. Deploy apps and users out to the shcluster
12. Restore the kvstore on the captain
13. Restore the kvstore on the other two nodes
14. Restart the SH cluster
15. Done

The team tested and it seems to be working, but none of the datamodels are working and it does not appear to recognize the kvstore restore.
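For reference, the kvstore backup and restore steps above can be done with the splunk CLI (a sketch; the archive name is arbitrary, and my understanding is the archive should be restored onto a matching KV store/Splunk version):

```
# On the standalone ES search head: archive the KV store
splunk backup kvstore -archiveName es_kvstore_backup

# On each cluster member (captain first): restore from the copied archive
splunk restore kvstore -archiveName es_kvstore_backup
```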
Hi, We have an issue installing the Universal Forwarder software on Solaris 10 SPARC servers (splunkforwarder-7.3.6-47d8552a4d84-SunOS-sparc.tar.Z). The following error occurs directly after accepting the license agreement and entering the Splunk admin username:

ld.so.1: splunkd: fatal: libstdc++.so.6: open failed: No such file or directory

OS info: SunOS 5.10 Generic_150400-65 sun4v sparc. The libc.so version is SUNW_1.23 (per Splunk Docs, universal forwarders on a Sun SPARC system that runs Solaris need a patch level of SUNW_1.22.7 or later of the C library, libc.so.1). We have tried updating our LD_LIBRARY_PATH to include the location of the "missing" library. libstdc++.so.6 is located in /usr/sfw/lib on our servers, so we ran:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/sfw/lib

but it didn't work. Any help or guidance would be appreciated. Thanks
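One possibility worth checking (an assumption about this environment, not a confirmed diagnosis): if splunkd on SPARC is a 64-bit binary, the 64-bit runtime linker consults LD_LIBRARY_PATH_64 and the sparcv9 library directories, not the 32-bit /usr/sfw/lib. A sketch:

```
# Check whether splunkd is a 32- or 64-bit binary (default UF install path assumed)
file /opt/splunkforwarder/bin/splunkd

# If 64-bit, point the 64-bit loader at the sparcv9 copy of libstdc++
export LD_LIBRARY_PATH_64=/usr/sfw/lib/sparcv9

# Or bake the path in system-wide with crle (-64 edits the 64-bit configuration)
crle -64 -u -l /usr/sfw/lib/sparcv9
```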
Working a bunch with the TrackMe app and it's showing a lot of promise. I finally got the right MLTK and Python applications installed; hopefully that will help some of the issues I'm having. When I modify a data source, choosing "auto lagging" comes back with unexpected results. In particular, I have a data source that only ingests M-F, between 9am and 5pm. The auto percentile lag for this data source over 7-30 days comes out at 1-3 seconds. How would I go about getting a longer average lag time for this source? In addition, how can I tell TrackMe not to show an alert state on the same sourcetype on a Monday morning, since it hasn't gotten any events since the Friday before? I don't want to set the lag time too high, as that will interfere with monitoring during weekdays. Thanks for your help.
Hi, I have events similar to this example:

1) date1, id1, misc
2) date2, id2, misc
3) date3, , misc
4) date4, id3 and id4, misc

The ids in 4) should be split into two separate lines. The result should look like this:

1) date1, id1, misc
2) date2, id2, misc
3) date3, , misc
4) date4, id3, misc
5) date4, id4, misc

But when using makemv with a tokenizer, lines which do not match the tokenizer are skipped in the result, e.g.:

... | makemv tokenizer="(id\d)" ID | mvexpand ID | ...

results in:

1) date1, id1, misc
2) date2, id2, misc
3) date4, id3, misc
4) date4, id4, misc

How can I keep the non-matching lines? Is there a way to only use makemv where it is necessary? My workaround at the moment is to append a second search that looks for events with an empty ID and adds those events back after the makemv, e.g.:

first search | ... | makemv tokenizer="(id\d)" ID | mvexpand ID | append [ first search again | ... | search NOT ID="*" | ... ] | ...

But searching twice can't be an optimal solution. Thank you, Gunnar
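A possible single-pass alternative (a sketch, assuming the non-matching events end up with a null ID after makemv, since mvexpand drops events where the expanded field is null): give those events a placeholder value before expanding, so they survive:

```
...
| makemv tokenizer="(id\d)" ID
| eval ID=coalesce(ID, "")    /* keep events where the tokenizer matched nothing */
| mvexpand ID
| ...
```

The empty string is just a stand-in; any sentinel value that your later pipeline tolerates would do.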
The Microsoft Azure Add-on for Splunk allows me to enter a custom sourcetype name. How is this accomplished in the conf files? I do not see a custom props or transforms created. Just wondering how this works so I can add it to one of my custom apps. This is impressive. For example, for azure:eventhub, if I name it azure:eventhub:sub001 in the GUI, which allows me to search and report more easily, I only see azure:eventhub defined in default/props; I do not see a custom local/props.conf, only the custom local/inputs.conf.
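A likely explanation (an assumption based on how modular inputs generally work, not verified against this add-on's source): the sourcetype is simply a per-input setting written to inputs.conf, which overrides the default, so no props/transforms entry is needed just to assign the name. Something like:

```
# local/inputs.conf — the stanza name here is illustrative; the real scheme comes from the add-on
[azure_event_hub://sub001]
sourcetype = azure:eventhub:sub001
index = main
```

A props.conf entry only becomes necessary when the new sourcetype needs its own parsing rules (timestamps, line breaking, field extractions).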
Hi all, I am using the following SPL to loop through HTTP request data in order to extract fields and values, and I have 2 issues, marked with *. streamstats with a custom count of 25 does not work, and Splunk does not work well with renaming values inside normal and curly brackets. Can anybody help please?

Sample Data:

Method:::GET###URI:::favicon.ico ###HTTP Version:::1.1###Host:::s.noname.com###X-Real-IP:::12.12.5.1###X-Forwarded-For:::12.9.5.221###X-Forwarded-Proto:::https###X-Forwarded-Port:::443###X-Forwarded-Host:::s.noname.com###User-Agent:::Mozilla/5.0 (Linux; Android 10; ) AppleWebKit/531.36 (KHTML, like Gecko) Chrome/12.0.3904.102 nonameBrowser/10.1.0.300 Mobile Safari/531.36###Accept:::image/webp,image/apng,image/*,*/*;q=0.2###Sec-Fetch-Site:::same-origin###Sec-Fetch-Mode:::no-cors###Referer:::https://s.noname.com/app/home###Accept-Encoding:::gzip, deflate###Accept-Language:::tr-TR,tr;q=0.9,en-US;q=0.2,en;q=0.1###Cookie:::NEW_nonameSearch_s__noname_com=1f3e090555524aecc1ce912; NEWts_nonameSearch_s_cloud_noname_com=152224319; HW_refts_nonameSearch_s_cloud_noname_com=1522243515550; APP_LANG=tr-tr; APP_REGION=te; IO_ts__s_cloud_noname_com=15933020425; NEWvc_nonameSearch_s_cloud_noname_com=5; IO_viewts_nonameSearch_s_cloud_noname_com=159143313###

SPL:

| streamstats count(25) AS n /*
| eval n = n-1
| eval f=split(RequestContent, "###")
| eval f{n}=mvindex(f, {n})
| eval fs=split(f{n},":::")
| eval f{n}V= trim(mvindex(fs, 1))
| eval f{n}H= mvindex(fs, 0)
| eval {f{n}H} = f{n}V /*
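Rather than looping with streamstats, this kind of key:::value### payload can often be unpacked in one step with the extract command, which accepts custom pair and key-value delimiters (a sketch; note that extract operates on _raw, so this applies directly when the delimited text is the raw event rather than a field like RequestContent):

```
...
| extract pairdelim="###" kvdelim=":::"
```

Each Header:::value pair then becomes its own field named after the header, with no per-position eval gymnastics.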
Hi, I want to integrate TeamCity with Splunk so that I can fetch the TeamCity database into Splunk. What are the best possible ways to implement this? Thank you,
Hi All, I want to ingest ESXi logs into Splunk through vRealize via syslog. Is there any app to get these logs parsed correctly? Currently I have installed the add-on for ESXi and I am using sourcetype=vmw-syslog. The logs I am getting are OK, but in the datamodel some fields such as user, dest, and action are appearing with the value "unknown". Could you please help me? Thanks in advance, NS
I'm seeing a mismatch error in the splunkd logs every 30 minutes, and I couldn't find a way to determine which saved search is causing the issue. Any idea how to find the cause of this error?

06-29-2020 06:31:16.821 -0400 ERROR SearchParser - Mismatched ']'.
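One way to narrow it down (a sketch, assuming the offending search still reaches the audit log before the parser rejects it): look for searches whose '[' and ']' counts don't balance around the times the error fires:

```
index=_audit action=search search=*
| eval opens=mvcount(split(search, "[")) - 1
| eval closes=mvcount(split(search, "]")) - 1
| where opens != closes
| table _time user savedsearch_name search
```

This can false-positive on brackets inside quoted strings, so treat the results as candidates to inspect rather than a definitive answer.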
Why are the files read from my monitored directories not showing any events?
Hi, I want to create an alert to check the traffic on my Tomcat servers and trigger it based on a count or percentage. I have this simple query, which shows me that around 1 PM the load on server 4 (red line) was significantly reduced, and then it went to zero over the next couple of hours. Please find the image attached. How can I set up an alert which triggers when this type of condition occurs?

index="myindex" sourcetype=access_combined_wcookie | timechart span=1h count by host

Let me know if someone can advise; it will be a great help.
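One common pattern for this (a sketch; the index and sourcetype come from the question, while the window and the 0.5 threshold are illustrative): compare each host's most recent hour against its average over the preceding day, and let the alert trigger when any results are returned:

```
index="myindex" sourcetype=access_combined_wcookie earliest=-24h@h
| timechart span=1h count by host
| untable _time host count
| stats avg(count) AS avg_count latest(count) AS last_hour BY host
| where last_hour < 0.5 * avg_count
```

Scheduled hourly with a "number of results > 0" trigger condition, this fires for any host whose current traffic has dropped below half its recent average.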
Hi, I'm getting the below error when I'm trying to create a ticket from Splunk. I'm passing this value in the custom fields:

"customfield_12345":{"value":"$result.myfield$"},"customfield_17653":"$result.mytime$","customfield_10678":{"value":"$result.urgency$"},"customfield_19876":{"value":"$result.severity$"},"customfield_18754":{"value": "$result.reporting$"}

The first error is as follows:

ERROR pid=1413 tid=MainThread file=cim_actions.py:message:424 | sendmodaction - worker="abc.com" signature="Unexpected error: Expecting property name enclosed in double quotes: line 15 column 2 (char 463)." action_name="jira_service_desk" search_name="test_jira"

and the other error is:

ERROR pid=1413 tid=MainThread file=cim_actions.py:message:424 | sendmodaction - worker="abc.com" signature="json loads failed to accept some of the characters, raw json data before json.loads:={ "fields": { "project":  { "key": "def"  }, "summary": "def_test", "description": "The alert condition for 'def_test' was triggered, results link: 'https://abc.com/ app/TA-jira-service-desk-simple-addon/@go?sid=scheduler_c2YXJ0di5jb20_VEEtaRlc2stc2ltcGxlLWFkZG9u__RM331f78_at_1593419940_82'", "issuetype": { "name": "abc" },  "priority" : { "name": "Medium"  },  \"customfield_12345\":{\"value\":\"hello"},\n\"customfield_17653\":\"2020-06-27T10:29:29.908+1100", \"customfield_10678\":{\"value\":\"Low"},\n\"customfield_19876\":{\"value\":\"High"},\n\"customfield_18754\":{"value": "API"}   } }" action_name="jira_service_desk" search_name="test_jira"

Any suggestions?
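For what it's worth, "Expecting property name enclosed in double quotes" from json.loads is the classic symptom of a trailing comma or stray escaped quotes in hand-built JSON, and in the raw payload above the custom-field quotes arrive backslash-escaped (\") while their closing quotes do not. For comparison, a well-formed version of just the custom-field portion (field IDs copied from the question) would be:

```json
{
  "customfield_12345": {"value": "$result.myfield$"},
  "customfield_17653": "$result.mytime$",
  "customfield_10678": {"value": "$result.urgency$"},
  "customfield_19876": {"value": "$result.severity$"},
  "customfield_18754": {"value": "$result.reporting$"}
}
```

Wherever the payload is assembled, it needs to emit plain quotes, not escaped ones, for these keys and values.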
How do I set a time range using a REST API call?
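Assuming this is about the search jobs endpoint: the Splunk REST API accepts earliest_time and latest_time parameters when creating a search job via POST to /services/search/jobs. A minimal sketch of the request payload (host, credentials, and the search string are placeholders):

```python
# Build the POST body for creating a search job with an explicit time range.
# Send it with any HTTP client to https://<host>:8089/services/search/jobs
payload = {
    "search": "search index=_internal | head 5",  # must start with the search command
    "earliest_time": "-24h@h",                    # relative time modifiers are accepted
    "latest_time": "now",
    "output_mode": "json",
}
print(payload["earliest_time"], payload["latest_time"])
```

Absolute epoch timestamps or ISO-8601 strings can be used in place of the relative modifiers.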
I wish to run a Python script that updates files within a monitored directory, without directly sending any files to the index. All the examples I've seen have people running a script and sending logs to their index. Would removing the sourcetype/index fields make it act the way I want? Or will it behave the way I want as long as I'm not sending logs within the script? Sorry for any confusion.

[script://./bin/TA-SimpleApp.py]
interval = 10
sourcetype = my_sourcetype
disabled = False
index = main
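As I understand scripted inputs (worth verifying in the docs for your version): only what the script writes to stdout is indexed, and the sourcetype/index settings apply to that stdout stream. A script that just writes files and prints nothing sends nothing to the index, so the stanza can stay as simple as:

```
# inputs.conf — run the script every 10 seconds; with no stdout, nothing is indexed
[script://./bin/TA-SimpleApp.py]
interval = 10
disabled = false
```

Leaving sourcetype/index out is harmless in that case; they would only matter if the script later starts emitting output.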
We have created a custom action in ITSI to POST the episode data to the outside world. Steps taken for creation of the custom action:

1. Created the app
2. Created the Python script in the bin folder of the application
3. Created alert_actions.conf inside the app
4. Added an <AppName> stanza to notable_event_actions.conf in the ITSI SA-ITOA folder

After this, the action name is visible in the Action drop-down menu of the ITSI episode.

Issue: Whenever we select the custom action it throws the error: Action = "AppName" failed to run. Refer logs for more information. When we refer to appserver.log it gives us the error below. Can anyone please help with this? It seems the action itself is not getting called from ITSI; what are your views?

Error log:
=============================================================
Status="Started", FunctionName="actionExecution", actionInternalName="iPaaS" 2020-06-24 17:01:47,601 INFO [appserver] [notable_event_actions] [get_event_ids] [12252] is_group=True. action_name=`iPaaS` received=`[u'90cb2e34-08f4-463b-a395-4ce4a383e198']` 2020-06-24 17:01:47,603 INFO [appserver] [notable_event_actions] [execute_action] [12252] Generated search command=`search `itsi_event_management_group_index ` itsi_group_id="90cb2e34-08f4-463b-a395-4ce4a383e198" | dedup itsi_group_id | fields * | `itsi_notable_event_actions_temp_state_values` | `itsi_notable_group_lookup` | `itsi_notable_event_actions_coalesce_state_values` | sendalert "iPaaS" param.url="https://external_tool.com "` for action=`iPaaS` with earliest_time=None, latest_time=None 2020-06-24 17:01:48,303 ERROR [appserver] [notable_event_actions] [_parse_and_call_execute_action] [12252] Actions (iPaaS) failed to execute on: [u'90cb2e34-08f4-463b-a395-4ce4a383e198'].
Error: search `itsi_event_management_group_index` itsi_group_id="90cb2e34-08f4-463b-a395-4ce4a383e198" | dedup itsi_group_id | fields * | `itsi_notable_event_actions_temp_state_values` | `itsi_notable_group_lookup` | `itsi_notable_event_actions_coalesce_state_values` | sendalert "iPaaS" param.url="https://external_tool.com "  search failed. Refer search.log at "/services/search/jobs/1593014507.15717/search.log". 2020-06-24 17:01:48,303 INFO [appserver] [notable_event_actions] [_parse_and_call_execute_action] [12252] actionId="None", actionName="notable_event_action", Status="Failed", FunctionName="actionExecution"
===============================================================
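One thing worth double-checking (an assumption drawn from how custom alert actions are generally wired up, not an ITSI-specific diagnosis): sendalert "iPaaS" needs an [iPaaS] stanza in the app's alert_actions.conf and an executable bin/iPaaS.py, with the names matching exactly; a mismatch makes the generated search fail just like this. A minimal sketch:

```
# alert_actions.conf in the custom app — the stanza name must match the sendalert name
[iPaaS]
is_custom = 1
label = iPaaS
param.url = https://external_tool.com
```

The search.log path quoted in the error (/services/search/jobs/1593014507.15717/search.log) should show the precise reason sendalert failed.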
Good morning all, I'm really hoping someone may be able to assist me, as I'm currently at a loss. I have a lookup that is essentially a reference sheet for some team & user data. There are two columns, the 'Team' name and the subsequent usernames, all neatly presented with an 'OR' delimiter. The reason for this is that within a dashboard, I need to run a separate search against our RSA log data, using the token of the username values from the lookup. The lookup is queried as such:

| inputlookup team_vpn_list.csv | search Team="Heritage Products"

which presents a table like this:

Team: Heritage Products
team_lvnos: fiacjro OR lv14447 OR lv14839 OR lv14989 OR lv15961

So, having searched through here, I've come across the map search function, and figured that my search would be along the lines of this:

| inputlookup team_vpn_list.csv | search Team="Heritage Products" | map search="search index=syslog source=*rsa* earliest=-15m $$team_lvnos$$"

However, this returns no results, and I've tried reducing the '$$' to '$' etc. in case that was the issue. From the job inspector, I can see that the search is running the following:

INFO SearchParser - PARSING: search index=syslog source=*rsa* earliest=-15m $"fiacjro OR lv14447 OR lv14839 OR lv14989 OR lv15961"$

So to me, this seems to be referencing the token as hoped, but not returning the expected events from the RSA event data, which I know are there. To confirm, if I run the above search without the $" at the start and end, it returns valid events from the syslog index and source. Please, anyone, if you're able to shed some light as to why this is happening, or how to remediate the issue, I'd be forever thankful.
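A possible alternative to map (a sketch): a subsearch that returns a field literally named search is spliced into the outer search as raw SPL, which sidesteps map's habit of quoting the substituted token:

```
index=syslog source=*rsa* earliest=-15m
    [ | inputlookup team_vpn_list.csv
      | search Team="Heritage Products"
      | eval search="(" . team_lvnos . ")"
      | fields search ]
```

The parentheses keep the OR list grouped, so the outer search effectively becomes index=syslog source=*rsa* earliest=-15m (fiacjro OR lv14447 OR ...).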
Hi all, I am trying to get this kind of visualization. Could anybody help me? Best regards!
Hi, What do these files mean? In dir /opt/splunk:

1.5M    rsa_scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5ba43509e6e89712f_at_1593296280_9250_98A434A0-EF12-4A03-865F-58FC89DB3621
1.5M    scheduler__nobody_U0EtSWRlbnRpdHlNYW5hZ2VtZW50__RMD5f155b8fe52024c5b_at_1593277800_8402_D42B43D6-7CD8-49F4-8960-5743B7FBF310

Thanks.
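These look like scheduler search artifact (dispatch) directory names; as far as I can tell from the naming convention, the segment between the double underscores is the owning app's name in base64 with the '=' padding stripped. Decoding the two segments above:

```python
import base64

def decode_app(segment: str) -> str:
    """Decode the base64 app-name segment of a dispatch directory name."""
    padded = segment + "=" * (-len(segment) % 4)  # restore the stripped '=' padding
    return base64.b64decode(padded).decode()

print(decode_app("U3BsdW5rX1NBX0NJTQ"))            # Splunk_SA_CIM
print(decode_app("U0EtSWRlbnRpdHlNYW5hZ2VtZW50"))  # SA-IdentityManagement
```

So these artifacts belong to scheduled searches from the Splunk_SA_CIM and SA-IdentityManagement apps, run as nobody at the epoch times embedded after _at_.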
I have a search query which yields a timechart. I want to show just the weekdays and skip the weekends in the charting of the data. I have used the clause

| eval day_of_week = strftime(_time,"%A") | where NOT (day_of_week="Saturday" OR day_of_week="Sunday") | fields - day_of_week

in my query, both before and after the timechart. The data doesn't contain any weekend information, yet when it is charted using timechart I always get the weekends on my x-axis. Any idea how to solve it?
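If I understand the behavior correctly, timechart draws a continuous time axis, so filtered-out buckets still appear as empty slots. A sketch of a workaround: bucket the time yourself and chart against a string label, which produces a categorical x-axis with no weekend positions (the index, sourcetype, and host split are illustrative):

```
index="myindex" sourcetype=access_combined_wcookie
| bin _time span=1h
| eval day_of_week=strftime(_time, "%A")
| where NOT (day_of_week="Saturday" OR day_of_week="Sunday")
| eval label=strftime(_time, "%m-%d %H:%M")
| chart count over label by host
```

The trade-off is losing timechart's time-aware axis formatting; the label format string controls what appears on the x-axis instead.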