All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I want to show a Splunk report on a Confluence page. This macro from the Atlassian Marketplace lets us make REST API requests from within pages: https://marketplace.atlassian.com/apps/1211199/pocketquery-for-confluence-sql-rest?hosting=cloud&tab=overview

I have a report in Splunk named "top-errors-week" that shows the top errors of a web application as a pie chart. How can I show it on a Confluence page? FYI: this is a dynamic chart, and whenever I change the report's date range I expect the change to be applied to the Confluence chart automatically. Any ideas? Thanks
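One approach, sketched below under the assumption that the macro (or any REST client) can call Splunk's REST API with credentials: dispatch the saved report through the search/jobs/export endpoint so the report's own definition, including its time range, is applied on every request. The host name here is a placeholder, and authentication is omitted.

```python
# Sketch: build the Splunk REST call a Confluence-side REST macro would issue
# to run the saved report "top-errors-week" and get its results as JSON.
# "splunk.example.com" is a placeholder; the request also needs credentials.

def build_export_request(base_url, report_name):
    """Return the URL and POST parameters for the export endpoint."""
    url = base_url.rstrip("/") + "/services/search/jobs/export"
    params = {
        # Dispatch the saved report, so date-range changes apply automatically
        "search": '| savedsearch "{}"'.format(report_name),
        "output_mode": "json",
    }
    return url, params

url, params = build_export_request("https://splunk.example.com:8089",
                                   "top-errors-week")
print(url)
print(params["search"])
```

Rendering the JSON rows as a pie chart would then be up to the Confluence side.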
There are various event codes, such as eventID="123", eventID="456", and eventID="789". Some appID values occur under both eventID="123" and eventID="456" (not all appIDs occur under both). I want to display a list of the appID values that occur under both eventID="123" and eventID="456". Please let me know how I can achieve this. I also have a large data set. Thank you.
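For what it's worth, the usual SPL pattern for this is to restrict the search to the two event codes and keep appIDs where the distinct count of eventID is 2. The underlying set logic, sketched in Python on invented sample data:

```python
# Sketch: find appID values that occur under BOTH eventID 123 and 456.
events = [
    {"eventID": "123", "appID": "A"},
    {"eventID": "123", "appID": "B"},
    {"eventID": "456", "appID": "B"},
    {"eventID": "456", "appID": "C"},
    {"eventID": "789", "appID": "A"},  # other event codes are ignored
]

apps_123 = {e["appID"] for e in events if e["eventID"] == "123"}
apps_456 = {e["appID"] for e in events if e["eventID"] == "456"}
both = sorted(apps_123 & apps_456)  # intersection: appIDs seen under both
print(both)
```

On large data sets the stats-based SPL form scales better than subsearch joins, since it streams and aggregates rather than materializing one side.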
Hi. First, I've been using this forum for a few months now, as I'm new to Splunk. Thanks to all the contributors on here!

Here's what I'm trying to figure out. I have a dashboard with 3 line charts. Each line chart is in its own panel (each panel has its own unique id). I'd like the panel background to turn red when a returned query value is above a certain number, and green when it is below that number.

This is the sample code I'm using to hardcode the 3 panels to either red or green:

<row>
  <panel>
    <html>
      <style>
        #A .dashboard-panel { background: #5AAF71 !important; }
        #B .dashboard-panel { background: #E35033 !important; }
        #C .dashboard-panel { background: #5AAF71 !important; }
      </style>
    </html>
  </panel>
</row>

But I'm looking for a way to change the panel background colors dynamically based on the value the query returns. Also, I am not an admin and we don't have permission to load JavaScript/CSS files, so all of my code will have to live in the dashboard XML. I have used ranges before to change colors in single-value visualizations; I'm just not sure whether the same is possible for the panel background that a line chart sits in. Thanks in advance!
Issue: source log events are not forwarded after log rotation.

Splunk UF version:

/opt/splunk# /opt/splunk/bin/splunk version
Splunk Universal Forwarder 7.0.0 (build c8a78efdd40f)

inputs.conf:

[monitor:///var/lib/origin/openshift.local.volumes/pods/*/volumes/kubernetes.io~empty-dir/applog/pipe-co.log]
sourcetype = pipe-co
ignoreOlderThan = 12h
crcSalt = <SOURCE>
index = pipe
disabled = false

The Splunk UF PID:

ps -ef | grep -i splunk | grep -v grep
root 75265 75137 1 19:19 ? 00:00:56 splunkd -p 8089 start

ls -al /var/lib/origin/openshift.local.volumes/pods/e43d5812-ebe7-11eb-bf87-48df374d0d30/volumes/kubernetes.io~empty-dir/applog/
total 4876
drwxrwsrwx. 2 root 1000100000 187 Jul 23 20:13 .
drwxr-xr-x. 3 root root 20 Jul 23 18:57 ..
-rw-r--r--. 1 1000100000 1000100000 960597 Jul 23 20:06 pipe-co-2021-07-23T20-06-16.710.log.gz
-rw-r--r--. 1 1000100000 1000100000 964929 Jul 23 20:09 pipe-co-2021-07-23T20-09-57.963.log.gz
-rw-r--r--. 1 1000100000 1000100000 963195 Jul 23 20:13 pipe-co-2021-07-23T20-13-26.509.log.gz
-rw-r--r--. 1 1000100000 1000100000 2021943 Jul 23 20:14 pipe-co.log

Any idea? Thanks
I have a query that finds what I need for the current time, and I saved it as a scheduled report. However, I also need the same statistics from my historical data, and I can't figure out a good way to execute that.

The query:

index=red_cont | dedup id sortby - _time | where status=="blue" | stats count by level

The query runs at the beginning of every hour, which is fine for current and future data, but how would I get a snapshot count for every hour from a given date, such as 1/1/21, until now? I understand I can do this manually, one hour at a time, using the time picker and changing the latest hour, but that would take a really long time. Thanks
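One way to backfill is to run the same report once per hourly window, whether by scripting REST dispatches or by looping earliest/latest pairs. A sketch, under that assumption, of generating the epoch-second window boundaries such a loop would use:

```python
from datetime import datetime, timedelta

def hourly_windows(start, end):
    """Yield (earliest, latest) epoch-second pairs, one per hour."""
    t = start
    while t < end:
        nxt = t + timedelta(hours=1)
        yield int(t.timestamp()), int(min(nxt, end).timestamp())
        t = nxt

# Three sample windows covering the first hours of 2021-01-01
windows = list(hourly_windows(datetime(2021, 1, 1),
                              datetime(2021, 1, 1, 3)))
print(windows)
```

Each pair can then be passed as the search's earliest/latest, so the hourly snapshot logic stays identical to the scheduled report.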
Hello, I'm new to Splunk and I've been given the task of adding new types of devices to our Splunk deployment. This includes creating dashboards so we can find the information we want more quickly. Currently we use many different devices: Cisco, Juniper, and Calix, to name a few. We capture all of the information using the same source.

What I want to do is create different dashboards for the different types of devices on the network, so you can look at all the errors or other trouble coming in from certain devices. I tried tagging a few devices based on hostname, but this seems impractical and a very long process. I also tried extracting fields from the various logs that come in, but because the devices use different message formats, there are a lot of conflicts when I try to extract fields.

Would it be easier to split up the devices by sending them to different sources, i.e., udp xxx1 for Cisco, xxx2 for Juniper, and so forth? Or is there an easier way? I have the Cisco IOS app installed and I notice the sourcetype from Cisco devices is set to Cisco IOS. Would it be easy to set something like that up for my other devices?
Hi Splunk experts,

I wonder if you could help me put the logic below into a search query? Here is the link to my original question: https://community.splunk.com/t5/Splunk-Search/kv-store-search-send-alert-and-also-store-the-the-alert-sent/m-p/560289#M159234

Thanks

"The logic of your requirement seems to be that there are two situations when a user appears in the audit (satisfying the conditions). Either they are in the list of alerts from yesterday, or they are not. If they were not in the list from yesterday, send an alert and add them to the list (noting when they were added). If they were in the list, don't send an alert but note they were there. Now, process the list and remove anyone who didn't appear today (so that an alert will be generated next time they appear on the list). Also, remove anyone who has been on the list for 7 days including today (so that an alert will be generated next time they appear on the list, even if it is tomorrow - day 8)."

Day | Audit name | Alert name at start | Alert sent date at start | Alert name at end | Alert sent date at end | Send alert
1   | James      |                     |                          | James             | 1                      | Y
1   | Michael    |                     |                          | Michael           | 1                      | Y
2   | James      | James               | 1                        | James             | 1                      | N
2   |            | Michael             | 1                        |                   |                        |
3   | James      | James               | 1                        | James             | 1                      | N
3   | Michael    |                     |                          | Michael           | 3                      | Y
4   | James      | James               | 1                        | James             | 1                      | N
4   | Michael    | Michael             | 3                        | Michael           | 3                      | N
5   | James      | James               | 1                        | James             | 1                      | N
5   | Michael    | Michael             | 3                        | Michael           | 3                      | N
6   | James      | James               | 1                        | James             | 1                      | N
6   |            | Michael             | 3                        |                   |                        |
7   | James      | James               | 1                        | James             | 1                      | N
7   | Michael    |                     |                          | Michael           | 7                      | Y
8   | James      |                     |                          | James             | 8                      | Y
8   |            | Michael             | 7                        |                   |                        |
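The quoted logic can be checked against the table with a short simulation. The day numbers stand in for dates, and the dict below stands in for the state that would live in the KV Store lookup:

```python
def process_day(day, audit, alert_list):
    """alert_list maps name -> day the alert was sent; returns names alerted today."""
    alerted = []
    for name in audit:
        if name not in alert_list:
            alerted.append(name)          # new appearance: send alert, add to list
            alert_list[name] = day
    for name in list(alert_list):
        if name not in audit:
            del alert_list[name]          # didn't appear today: drop from list
        elif day - alert_list[name] + 1 >= 7:
            del alert_list[name]          # on the list 7 days including today

    return alerted

audits = {1: ["James", "Michael"], 2: ["James"], 3: ["James", "Michael"],
          4: ["James", "Michael"], 5: ["James", "Michael"], 6: ["James"],
          7: ["James", "Michael"], 8: ["James"]}
state, sent = {}, {}
for day in range(1, 9):
    sent[day] = process_day(day, audits[day], state)
print(sent)
```

Running this reproduces the table: alerts fire on day 1 for both, on days 3 and 7 for Michael (after dropping off), and on day 8 for James (after the 7-day expiry).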
To provide predictive maintenance, does the app Splunk Essentials for Predictive Maintenance (#4375) need to be installed on each server individually? Thank you very much in advance for your reply.
I have a custom generating command that returns events to Splunk; however, those events are not parsed, so the key-value data in them is not available later in the search. I'm using the Python SDK. I've tried both type='streaming' and type='events' in the @Configuration() decorator. How do I get Splunk to parse the events I'm giving it?
Hi all, I am stuck with a 'Waiting for Input' error on one of the panels I created in a Splunk dashboard, even though the search runs fine in the Search app. Reading through other similar questions, it seems related to tokens. I tried rectifying it, but with no luck. Here are the search and the XML:

Search:

| inputlookup xxxxxxx.csv
| stats dc(title) as number_of_rule, values(title) as rules by category
| map [| inputlookup yyyyyyyy.csv
    | eval Date=strftime(_time, \"%m/%d/%Y\")
    | eval month=strftime(_time, \"%m\")
    | eval current_month=strftime(now(),\"%m\")
    | where month=current_month-1
    | search index=$$category$$
    | stats sum(GB) as GB by index
    | eval GB=round(GB,3)
    | eval index=\"$$category$$\", number_of_rule=\"$$number_of_rule$$\"
    | table index, number_of_rule, GB ]

XML:

{
    "type": "ds.search",
    "options": {
        "query": "| inputlookup xxxxxxx.csv\r\n| stats dc(title) as number_of_rule, values(title) as rules by category\r\n| map [| inputlookup yyyyyyyyy.csv\r\n| eval Date=strftime(_time, \\\"%m/%d/%Y\\\")\r\n| eval month=strftime(_time, \\\"%m\\\")\r\n| eval current_month=strftime(now(),\\\"%m\\\")\r\n| where month=current_month-1\r\n| search index=$$category$$\r\n| stats sum(GB) as GB by index\r\n| eval GB=round(GB,3)\r\n| eval index=\\\"$$category$$\\\", number_of_rule=\\\"$$number_of_rule$$\\\" | table index, number_of_rule, GB\r\n]"
    },
    "name": "Search_8"
}

Thanks in advance!
Does anyone have a sample inputs.conf for capturing Windows data such as CPU utilization, memory utilization and disk utilization?  Just looking for the basics.  I could not find any good baseline samples. Thank you very much!
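A minimal sketch, modeled on the stock perfmon stanzas that ship with the Splunk Add-on for Windows. The interval, the instances values, and the index name (windows_perf) are placeholder assumptions to adapt, and the counter names assume an English-language Windows install:

```ini
# inputs.conf sketch: basic CPU, memory, and disk performance counters

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
index = windows_perf

[perfmon://Memory]
object = Memory
counters = Available Bytes; % Committed Bytes In Use
interval = 60
index = windows_perf

[perfmon://LogicalDisk]
object = LogicalDisk
counters = % Free Space; Free Megabytes
instances = *
interval = 60
index = windows_perf
```

Counter and object names can be checked against what the host actually exposes (e.g., with typeperf -q) before enabling the stanzas.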
Hi folks, I am new to Alert Manager and I am trying to configure it. I have Splunk Cloud, hence my access to the config files is limited. So far the alerts are all going across without any issue, but when I try to assign one to another person, it seems it won't let me save any updates or reassign the alert entry. Any ideas which file I need to change, and what the changes are? I need to be very specific for the Splunk Cloud support team. Any help would be greatly appreciated.
I'm attempting to pass a variable/value between custom functions in a playbook. I've done this before without issue, but in this scenario I'm running into the following error:

"local variable 'json' referenced before assignment"

I'm attempting to pass an HTML string, but it's erroring on a line in the function I'm passing it to, which is locked/not editable:

get_user_session__ip_list_testing = json.loads(phantom.get_run_data(key='get_user_session:ip_list_testing'))

Any ideas how I can accomplish what I'm after? Thanks in advance.
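As an aside on the error itself: it is a plain Python scoping error, not SOAR-specific. It means that, in that function's scope, the name json resolves to a local variable that has no value at the point of use, rather than to the imported module. A minimal sketch of the round trip the locked line expects, with the phantom calls mimicked by plain functions (the key name is copied from the question; everything else is illustrative):

```python
import json  # this import must be in scope where json.loads is called

def save_side(value):
    # stands in for phantom.save_run_data(value=..., key='get_user_session:ip_list_testing')
    return json.dumps(value)

def load_side(run_data):
    # stands in for: json.loads(phantom.get_run_data(key='get_user_session:ip_list_testing'))
    return json.loads(run_data)

ips = ["10.0.0.1", "10.0.0.2"]
result = load_side(save_side(ips))
print(result)
```

If the producing side saves a raw HTML string rather than a JSON-encoded value, the consuming side's json.loads will also fail, so serializing with json.dumps before saving is worth checking.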
Hi everyone, I have enabled token-based authentication and created a few tokens. I can see them in the UI, but I want to know where they are stored on the backend, i.e., in which .conf file.
Hi, I am trying to configure a Universal Forwarder and a Heavy Forwarder. In the UF I see:

Active forwards: None
Configured but inactive forwards: A.B.C.D:9997

splunkd.log:

07-23-2021 11:45:00.807 +0000 WARN AutoLoadBalancedConnectionStrategy [42092 TcpOutEloop] - Applying quarantine to ip=A.B.C.D port=9997 _numberOfFailures=2
07-23-2021 11:45:42.188 +0000 WARN TcpOutputProc [42091 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=A.B.C.D inside output group default-autolb-group from host_src=UF_name has been blocked for blocked_seconds=3000. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-23-2021 11:47:22.196 +0000 WARN TcpOutputProc [42091 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=A.B.C.D inside output group default-autolb-group from host_src=UF_name has been blocked for blocked_seconds=3100. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-23-2021 11:49:02.204 +0000 WARN TcpOutputProc [42091 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=A.B.C.D inside output group default-autolb-group from host_src=UF_name has been blocked for blocked_seconds=3200. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-23-2021 11:50:29.730 +0000 INFO AutoLoadBalancedConnectionStrategy [42092 TcpOutEloop] - Removing quarantine from idx=A.B.C.D:9997
07-23-2021 11:50:29.732 +0000 ERROR TcpOutputFd [42092 TcpOutEloop] - Read error. Connection reset by peer
07-23-2021 11:50:29.734 +0000 ERROR TcpOutputFd [42092 TcpOutEloop] - Read error. Connection reset by peer
07-23-2021 11:50:29.734 +0000 WARN AutoLoadBalancedConnectionStrategy [42092 TcpOutEloop] - Applying quarantine to ip=A.B.C.D port=9997 _numberOfFailures=2

tcpdump also showed me a reset from the HF side.

I have communication between the UF and the HF; all necessary ports are open:

[root@UF_name ~]# nc -z -v A.B.C.D 9997
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to A.B.C.D:9997.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
[root@UF_name ~]# nc -z -v A.B.C.D 8000
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to A.B.C.D:8000.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
[root@UF_name ~]# nc -z -v A.B.C.D 8089
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to A.B.C.D:8089.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.

How can I solve this problem? Any tips?
I am doing the labs for Fundamentals Part 2 and I am not understanding something.

I have to use the startswith and endswith options of the transaction command to display transactions that begin with an addtocart action and end with a purchase action. The end result should look like this:

The successful query for that is:

index=web sourcetype=access_combined
| transaction clientip startswith=action="addtocart" endswith=action="purchase"
| table clientip, JSESSIONID, product_name, action, duration, eventcount, price

However, when I try the following query:

index=web sourcetype=access_combined
| transaction clientip startswith="addtocart" endswith="purchase"
| table clientip, JSESSIONID, product_name, action, duration, eventcount, price

the output (shown below) is not correct.

I am interested to know why omitting the "action" field in startswith and endswith gives me a different result and doesn't group the events anymore. Thank you in advance for your help.
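A likely explanation, based on the transaction command's documentation: a bare quoted string given to startswith/endswith is treated as a search against the raw event text, while action="purchase" requires the extracted field to have that value. Any event whose raw text happens to contain the term (say, a URI mentioning purchase) can then open or close a transaction. A rough Python analogue of the two filters, on invented events:

```python
events = [
    {"_raw": "GET /cart.do?action=addtocart&productId=1", "action": "addtocart"},
    {"_raw": "GET /oldlink?item=purchase-guide",           "action": "view"},
    {"_raw": "GET /cart.do?action=purchase&productId=1",   "action": "purchase"},
]

# startswith="purchase": bare string matched against the raw event text
raw_match = [e for e in events if "purchase" in e["_raw"]]

# startswith=action="purchase": the extracted field must equal the value
field_match = [e for e in events if e.get("action") == "purchase"]

print(len(raw_match), len(field_match))
```

The raw-text filter matches one extra event here, which is why the transactions group differently without the field qualifier.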
Can you provide an example search query or script for alerting when a Linux server is shut down, down, or back up? I am looking for the best way to set up a shutdown/down/up status alert for a Linux server.
Hi, I am using a base search to display a token. However, I noticed it flicks to 0 and then to the number I need. I need something like this:

<!--condition match=" $result.count$ != 0"-->

but there is also a case where I need it to be 0. So how can I get it to display the number only when the job is 100% done? I have tried done and finalized, but they all display the 0 at the wrong time before changing to the correct number. I have also tried adding:

<condition match=" $job.resultCount$ == 1">

but I still get the 0 and then the number I need.

<search base="basesearch_MAIN">
  <!-- Find out how many processes are being monitored -->
  <query>| stats count </query>
  <progress>
    <set token="Token_no_of_Process">$result.count$</set>
  </progress>
</search>
Hi, I am deploying Splunk 8.1.4 from scratch in our lab, and I am finding it difficult to understand how the data inputs included in the TA are supposed to be managed. Following the official instructions, I configured inputs.conf and props.conf in /local, enabling two stanzas pointing to a test index:

[WinEventLog://Application]
[WinEventLog://Security]

How can I find the new inputs in the GUI? I don't really understand how the TA binds with the UI; I don't see any new input in the local inputs. Is this normal?

Also, I read that the index configuration was removed from the add-on and the indexes need to be configured manually, but I don't see any recommendation about which index names to use. Does it not really matter? I can imagine that Windows apps might expect specific index names to work properly.

Sorry for the basic questions; I couldn't find the answer myself digging through the documentation. Many thanks.
Hello all,

This is probably very easy, or impossible, in Splunk, but I can't find any sufficient answers. I am trying to remove a single property from a JSON event (during parsing; I don't want it at all). For example, I want to remove the "country" property and everything in it from every event that comes into Splunk. Is something like that possible? I have tried some SEDCMD settings in props.conf, but with no success. Do you have any ideas? Thank you very much.

{
  "random": 23,
  "random float": 28.173,
  "bool": false,
  "date": "1990-08-31",
  "regEx": "helloooooooooooooooooooooooooooooooooooooooooooooooooo world",
  "enum": "generator",
  "firstname": "Latisha",
  "lastname": "Alexandr",
  "city": "Tiraspol",
  "country": "Algeria",
  "countryCode": "MC",
  "email uses current data": "Latisha.Alexandr@gmail.com",
  "email from expression": "Latisha.Alexandr@yopmail.com",
  "array": ["Dyann", "Christal", "Renie", "Tilly", "Margette"],
  "array of objects": [
    { "index": 0, "index start at 5": 5 },
    { "index": 1, "index start at 5": 6 },
    { "index": 2, "index start at 5": 7 }
  ],
  "Raquela": { "age": 50 }
}
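Index-time removal like this is usually attempted with a SEDCMD in props.conf, something along the lines of SEDCMD-drop_country = s/"country"\s*:\s*"[^"]*"\s*,?\s*//g (the setting name and exact escaping are assumptions to verify against your sourcetype). The substitution itself, checked in Python on a trimmed sample:

```python
import json
import re

raw = ('{"random": 23, "firstname": "Latisha", '
       '"country": "Algeria", "countryCode": "MC"}')

# Drop the "country" key, its string value, and the trailing comma.
# The closing quote in the pattern keeps it from touching "countryCode".
cleaned = re.sub(r'"country"\s*:\s*"[^"]*"\s*,?\s*', '', raw)

parsed = json.loads(cleaned)  # the remainder is still valid JSON
print(sorted(parsed))
```

One caveat: if "country" were the last property in an object, stripping it this way would leave a dangling comma before the closing brace, so the pattern needs adjusting for that case.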