All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello,

I want to build a dashboard from which users can add rows to a lookup. Many cells in those rows are calculated (for example, an automatically generated ID for each row), and I want this to stay transparent to the user. Users fill in one or more text input fields; any field left unchanged should be set to * (I don't mind * being shown as the initial value in the input fields). Clicking the submit button must reset all fields back to *.

I tried adapting an existing answer about emptying the input fields, but with no luck (I don't have enough Karma points to share the link). Thank you in advance for your time and support.

Best regards.
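One way to sketch this in Simple XML (untested; the lookup name, token name, and field name below are hypothetical): give each text input * as its default and initial value, and use the <done> handler of the search that the submit button triggers to push * back into the form tokens. Setting form.<token> rewrites what the input displays.

```xml
<form>
  <fieldset submitButton="true">
    <!-- hypothetical input; repeat for each text field -->
    <input type="text" token="field1_tok">
      <label>Field 1</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| inputlookup my_lookup.csv | search field1=$field1_tok|s$</query>
          <!-- when the submitted search finishes, reset the visible input back to * -->
          <done>
            <set token="form.field1_tok">*</set>
          </done>
        </search>
      </table>
    </panel>
  </row>
</form>
```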
How can I display XML with proper line formatting? Below is an example with different data:

1. <Call><splunk>201</splunk><Time>4547554</Time></Call>
2. <Call><script>201</script><callId>IND00900</callId></Call>
Hi, folks. Say I have a file with one line of sample text. My goal is to emulate patterns like this:

1 AM = 10 events
2 AM = 10 events
3 AM = 15 events
4 AM = 20 events
...
1 PM = 1000 events
2 PM = 1200 events
3 PM = 700 events
4 PM = 300 events

and so forth. I understand that I can use the likes of minuteOfHourRate, hourOfDayRate, etc. to get this kind of pattern IF I have sample files with multiple lines of sample events in them. Is it possible to do the same if I only have one line in my sample file? Please advise. Thank you.
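If this refers to the Splunk Eventgen app, a one-line sample file should still work: count controls how many copies of that single line are generated per interval, and hourOfDayRate then scales that count by hour of day. A hedged eventgen.conf sketch (the stanza name, counts, and rates are made up):

```ini
# eventgen.conf sketch -- hypothetical values; the sample file contains a single line
[sample_one_line.txt]
mode = sample
interval = 3600
earliest = -1h
latest = now
count = 100
# multiplier applied to count for each hour of the day (keys are hours 0-23)
hourOfDayRate = { "0": 0.1, "1": 0.1, "2": 0.15, "3": 0.2, "13": 10.0, "14": 12.0, "15": 7.0, "16": 3.0 }
```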
Hi Team,

My scenario: I have multiple request and response XMLs, which are my events in the index, for one circuit ID. Whenever I submit a request with the circuit ID from the UI, it creates a new transaction ID for that particular hit, which means the logs will have multiple request IDs for the same circuit ID in one day.

What I need: when I search with the circuit ID, the result should be a table showing all the different request IDs along with their specific response fields, one request per row. My challenge is that when I try to show the fields from the request and response XMLs (which come from multiple source files) in a single row, the search returns multiple rows instead. Please help if there is any way to get this done.
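Assuming the request and response events share the request/transaction ID field, one common pattern is to group on that ID with stats, so the fields from both events collapse into one row per request ID. The index and field names below are placeholders:

```spl
index=circuit_logs circuit_id="<your_circuit_id>"
| stats values(request_field) AS request_field
        values(response_field) AS response_field
  BY request_id
```

values() merges whatever each event contributes, so the request event supplies request_field and the response event supplies response_field for the same request_id.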
I have field values as below:

field1=value1 field2=server1
field1=service/value2/a1 field2=server2
field1=value3 field2=server3
field1=service/value4/a2 field2=server4
field1=value5 field2=server5
field1=service/value6/a2 field2=server4
field1=value7 field2=server6
field1=service/value8/a2 field2=server2

I am getting a few extra strings in field1 from server2 and server4. I want to check whether the log is from server2 or server4, and if so, strip the leading and trailing segments and keep only the actual value. My final output field should look like this:

field1=value1; value2; value3; value4; value5; value6; ... etc.
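One possible approach (a sketch, assuming the extra strings always have the shape prefix/value/suffix): conditionally strip the segments with replace() when the event comes from server2 or server4, then collect the cleaned values into one semicolon-joined field.

```spl
... | eval field1=if(match(field2, "^(server2|server4)$"),
                     replace(field1, "^[^/]+/([^/]+)/.*$", "\1"),
                     field1)
| stats values(field1) AS field1
| eval field1=mvjoin(field1, "; ")
```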
Hello all,

I have a field with data that looks like this:

The process has failed. Please review.
Dear Team
Please assign to Team
Process blah blah to blah blah
Please review logs.
Sincerely
Support

I want to remove all line breaks, like so:

The process has failed. Please review blah: Dear Team Please open a new Incident and assign to Team blah Submitted from 1928389112828 blah. Please review attached logs. Sincerely, Support.

I've tried sed to do it: | rex mode=sed field=description "s/(\n+)//g" , but the output still has extra spaces at the beginning. I've also tried trim(description), but it gives me the same result. Any help would be appreciated. Thanks.
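The sed expression removes the newlines themselves but leaves the spaces that surrounded them, which is why trim() (which only touches the ends of the string) doesn't help. A sketch that also collapses the leftover whitespace runs:

```spl
... | rex mode=sed field=description "s/[\r\n]+/ /g"
| rex mode=sed field=description "s/\s{2,}/ /g"
| eval description=trim(description)
```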
Hi folks,

We have planned to extend resources across our entire cluster deployment (search heads, indexers, index master, etc.). These are running RHEL 6 on VMs. Do we really need to restart splunkd after the OS picks up the new hardware changes on the fly? Any advice is appreciated.

Pramodh B
Splunker Jr.
I have installed Splunk 8.0 with the Splunk ODBC driver "splunk-odbc_211" on Windows Server 2012 R2 Std x64 and Windows Server 2016 R2 Std x64. However, I'm getting the error below:

The setup routine for the Splunk ODBC Driver could not be loaded due to system error code 126: The specified module could not be found (C:\Program Files\Splunk ODBC Driver\lib\SplunkDSII.dll).

What should I install or configure next, or do you have any other advice? Thank you.
Hi all,

We want to add a link to the dashboard (not the PDF) in an alert email, so that whenever a user clicks the link they can access the dashboard showing the data for the last hour before the email was generated. The idea is to pass the email generation time, through the link in the email message, as the latest time of the dashboard's base search.

For example, the email message contains: link to dashboard: "//localhost:port/dashboard", and the dashboard base search is:

<query> ..... </query>
<earliest>-1h</earliest>
<latest>$tok$</latest>

Here $tok$ in the dashboard should receive the email generation time in a supported format. Is this possible? Please correct my approach and help me achieve this.
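One way to sketch this (the hostname, app, and dashboard names are placeholders): build the link inside the alert's search itself, embedding now() as a form token value, then reference it in the email message as $result.dash_link$. The dashboard declares an input with token tok and uses $tok$ as the <latest> of its base search.

```spl
... | eval dash_link="https://splunkhost:8000/en-US/app/search/my_dashboard?form.tok=" . now()
| table dash_link
```

When the alert fires, now() is the generation time in epoch seconds, which <latest> accepts directly.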
Hi,

In our environment, Nagios and Splunk are integrated. We configured an alert in Nagios that fetches data from Splunk, but Nagios shows it as "UNKNOWN - Error in Application name "wms"". The alert uses the script check_splunk_savedsearch_value.sh with three arguments:

check_splunk_savedsearch_value.sh -a wms -s "WMOS - EW - Number of Allocation records" -w 1

[root@nagios server]# ./check_splunk_savedsearch_value.sh -a wms -s "WMOS - EW - Number of Allocation records" -w 1
UNKNOWN - no output returned from splunk.ce.corp|"wms:WMOS - EW - Number of Allocation records"=ERROR

When we ran the script in debugging mode, the following command returned no output:

[root@nagios server]# /usr/bin/curl -s -k -u username:password https://splunk.ce.corp:8089/servicesNS/monitor/wms/search/jobs/export -d 'search=savedsearch %22WMOS%20%2d%20EW%20%2d%20Number%20of%20Allocation%20records%22' -d output_mode=csv|sed 1d
[root@nagios server]#

What could be the reason? We also noticed that the Splunk forwarder is not installed on the Nagios production server. Does the Splunk forwarder need to be installed there?
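Two things worth checking in that curl (a sketch, not verified against your environment): the search string sent to the export endpoint must begin with a generating command, so a saved search is normally invoked as | savedsearch "…", and letting curl do the URL encoding with --data-urlencode avoids hand-encoded sequences like %2d:

```
/usr/bin/curl -s -k -u username:password \
  https://splunk.ce.corp:8089/servicesNS/monitor/wms/search/jobs/export \
  --data-urlencode 'search=| savedsearch "WMOS - EW - Number of Allocation records"' \
  -d output_mode=csv | sed 1d
```

The missing forwarder is unrelated: this check talks to the Splunk REST API over port 8089, so no forwarder is needed on the Nagios host.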
Hi, how can I exclude events with internal source IPs (src_ip in 10.0.0.0/8) from a sourcetype (web_logs) before indexing?
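The usual way to drop events before indexing is a nullQueue transform applied on the indexers (or a heavy forwarder; a universal forwarder cannot do this). A sketch, assuming src_ip=10.x.x.x appears literally in the raw event text:

```ini
# props.conf
[web_logs]
TRANSFORMS-drop_internal = drop_internal_src

# transforms.conf
[drop_internal_src]
REGEX = src_ip=10\.\d{1,3}\.\d{1,3}\.\d{1,3}
DEST_KEY = queue
FORMAT = nullQueue
```

REGEX runs against _raw at parse time; matching events are routed to the null queue and never indexed (they still count nothing against license, since they are dropped before indexing).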
I'd like to search the status of Incident Review, and I have found two ways to do it:

1) | inputlookup append=T es_notable_events
2) | inputlookup append=T incident_review_lookup

Besides these two, many people may use "| incident_review" for this; presumably "| incident_review" wraps 2) | inputlookup append=T incident_review_lookup. Ideally, I want to display detailed information such as rule_title, src, user, etc., so I guess 1) | inputlookup append=T es_notable_events is better. However, I'd like to know the differences between "es_notable_events" and "incident_review_lookup".

Best regards,
I have one missing event out of 168 events from our Universal Forwarder. I've already checked the internal logs, and the file has been indexed ("Batch input finished reading file="), but I cannot find this source in my index. I also tried expanding the time range, and nothing appears. I then checked whether the forwarder was restarted at the time the file was indexed, but it was not. The settings on my forwarder are:

inputs.conf

[batch://my_path]
move_policy = sinkhole
disabled = false
sourcetype = my_sourcetype
index = my_index

outputs.conf

[tcpout]
defaultGroup = default-autolb-group-forwarder

[tcpout:default-autolb-group-forwarder]
disabled = false
server = myIndexer:9997
useACK = true
Hi, my query doesn't seem to convert from MB to GB. What am I doing wrong? Can anyone help me?

index=*
| eval TotalMB=round((TotalSent+TotalRcvd)/1024/1024,2)
| eval TotalGB=round(TotalMB/1024,2)
| stats sum(sentbyte) AS TotalSent, sum(rcvdbyte) AS TotalRcvd by app
| addtotals
| dedup app
| sort limit=30 - total
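SPL evaluates the pipeline top to bottom, so at the point of the first eval, TotalSent and TotalRcvd do not exist yet (stats only creates them later) and TotalMB comes out null. Moving the evals after the stats should fix it; dedup is also redundant after stats ... by app, since stats already emits one row per app. A sketch:

```spl
index=*
| stats sum(sentbyte) AS TotalSent sum(rcvdbyte) AS TotalRcvd BY app
| eval TotalMB=round((TotalSent+TotalRcvd)/1024/1024,2)
| eval TotalGB=round(TotalMB/1024,2)
| sort 30 -TotalGB
```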
Hi,

I have queries that do not run in DB Connect, but they run on the Informix server itself and return results. What is the reason?

select * from syscolumns;
select * from systables a, systables b;
update t1 set rowsize = rowsize + 100;

Thanks
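If this is Splunk DB Connect, two common causes are worth checking against your version's documentation: DB Connect generally rejects a trailing semicolon on the SQL it is given, and database inputs expect a single SELECT statement, so the UPDATE would not run as an input at all. For example, the first query would be entered as:

```sql
-- no trailing semicolon, one statement per input
select * from syscolumns
```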
When creating rules, either with Custom Rules in APM or Include/Exclude Rules in EUM, the tool gives the ability to match the URL/URI based on several conditions: Is in List, Starts With, etc. These same options exist in other areas as well. How can a rule be created that lists values (Is in List) and uses wildcards in the values? For example (asterisk is the wildcard):

Is in List: /site/portal1*, /site/portal2*
I have a scheduled PDF that needs to display the dates the report was run for. Unfortunately, I just learned that tokens will not display in a scheduled PDF the way they do when I open the dashboard and then export to PDF. My other option is to display the report dates in the email message instead. I went through the Splunk documentation and several other posts, but I cannot figure out why my results token still comes across as blank when the email gets sent out.

Here is the search I created to establish the start and end date for the report. It is the first search and only returns one row of results. Per suggestions in other posts, I removed | table and am using | fields instead. I even tried removing | fields altogether and still have no results in my email message.

<search>
  <query>| makeresults
| eval start = strftime(relative_time(now(), "-1w@w0"),"%m-%d-%Y")
| eval end = strftime(relative_time(now(),"@w6"),"%m-%d-%Y")
| eval message="Report was generated from "+start+" through "+end
| fields message</query>
  <progress>
    <set token="message">$result.message$</set>
  </progress>
</search>

This search is not contained in a panel; it sits outside all panels, and the token is then used as the title of an HTML panel. In the PDF schedule, under Message, I simply put $result.message$, but it is completely blank when emailed. I even tried $message$, since that is the token name, but of course that didn't work either. Does anyone have any idea why this isn't working, or another way to get the dates to display on my scheduled PDF?
After several years, the replication factor on my 6.6.3 index cluster recently changed to 'not met'. It has been fine in the past, and I can see buckets replicating among the 4 members of the cluster. I can see open connections on :9887 among the members, they show up in each other's splunkd.log as successful replications, and nothing has changed configuration-wise or even version-wise. Two members of the cluster each have 200 GB less room than the other two, and I cannot find anything that helps me figure out what the problem is. The monitoring console says 'Not Met', show cluster-config says 'not met', and the --verbose output on the master looks like this for every entry, with the Replicated copies and Searchable copies trackers showing the same numbers across the board:

network_wireless_aps
Number of non-site aware buckets=0
Number of buckets=28
Size=366897499
Searchable YES
Replicated copies tracker 28/28 28/28
Searchable copies tracker 28/28 28/28

network_wireless_controllers
Number of non-site aware buckets=0
Number of buckets=35
Size=19069528111
Searchable YES
Replicated copies tracker 35/35 35/35
Searchable copies tracker 35/35 35/35

(The same is true for the search factor, but I figure if one gets better, the other will too.)
Hello,

Someone made changes to a GPO that negatively impacted devices in our environment. I searched event code 4733, but I haven't been able to link an admin account to the activity. I found the query below in another thread (https://answers.splunk.com/answers/172845/how-to-correlate-the-admin-user-with-a-gpo-change.html), but there seems to be an issue with the ldapfilter portion. Would anyone know what the issue is, or know of something better I can use? Thank you.

index=* EventCode=5137 OR EventCode=5136 OR EventCode=5141 Class=groupPolicyContainer
| rex field=DN "(?i)CN\=(?<gpo_guid>.*?)\,"
| eval action=case(EventCode=5137, "CREATED", EventCode=5136, "MODIFIED", EventCode=5141, "DELETED")
| ldapfilter domain=*DOMAIN* search="(&(objectclass=groupPolicyContainer)(|(cn=$gpo_guid$)(displayName=*{*}*)))" attrs="displayName"
| convert ctime(_time) as Time
| table _time Security_ID EventCodeDescription action gpo_guid displayName
Hello, I'm trying to write a rex command to extract the "is set to expire" date from events like this:

Relying party trust 'ButterCup Games - Test' xxxxx: Signing certificate with thumbprint '1111111111111111111111' is set to expire on 2/13/2020 6:59:59 PM.
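A rex sketch that captures the expiry timestamp following that phrase (the field name expiry is just an example; adjust the pattern if your timestamps vary):

```spl
... | rex field=_raw "is set to expire on (?<expiry>\d{1,2}/\d{1,2}/\d{4}\s+\d{1,2}:\d{2}:\d{2}\s+[AP]M)"
```

If only the literal phrase is needed rather than the date, a simpler pattern like "(?<phrase>is set to expire)" would do.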