All Topics

I'm learning how to use the HTTP Event Collector, but no events ever show up in search. I have the inputs enabled and my token set up as shown. When I run the following command in my command prompt:

curl -k http://<instance-host>:8088/services/collector -H "Authorization:Splunk 4f99809e-55d3-4677-b418-c0be66693311" -d "{\"sourcetype\": \"trial\", \"event\":\"Hello World!\"}"

I get back {"text": "Success", "code": 0}. I followed along with the tutorial here: https://www.youtube.com/watch?v=qROXrFGqWAU I've also tried changing the sourcetype to json_no_timestamp, but that didn't work either. I'm confident that I've set everything up correctly, but nothing seems to be working. Is there a fix for this? I'm also trying to do the same with collectd metrics.
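A couple of diagnostic searches sometimes used in this situation (sketches only; the component name is an assumption to verify against your version). "Success" from HEC only means the event was accepted, so the usual culprits are the token routing events to a non-default index or a misparsed timestamp; searching all indexes over all time covers both, and the internal logs show HEC-side errors:

index=* sourcetype=trial earliest=0

index=_internal sourcetype=splunkd component=HttpInputDataHandler
| stats count by log_level, host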
I have error messages in the following two formats:

{ "level":"error", "message":"Log: \"error in action {\\\"status\\\":\\\"error\\\",\\\"message_error\\\":\\\"blacklisted\\\"}\"", "timestamp":"2021-09-27T16:39:07-04:00" }

and

{ "level":"error", "message":"Log: \"error in action \\\"<HTML><HEAD>\\\\n<TITLE>Service Unavailable</TITLE>\\\\n</HEAD><BODY>\\\\n<h1>Service Unavailable - Zero size object</h1>\\\\nThe server is temporarily unable to service your request. Please try again\\\\nlater.<p>\\\\nReference&#32;&#35;15\\\\n</BODY></HTML>\\\\n\\\"\"", "timestamp":"2021-09-26T23:12:25-04:00" }

Now I am creating a dashboard that displays the overall error counts for a period of time. The following query gives me the count based on message_error:

index=my_index_name sourcetype=my_source_type_name:app
| spath message
| regex message="^.*error in action.*$"
| eval error_json=replace(ltrim(message, "Log: \"error in action"),"\\\\\"","\"")
| spath input=error_json output=error_message path=message_error
| top error_message

Since this relies on JSON parsing, it does not work for the second type of error message, which is just HTML after the common error string. I would like to show the count for this error along with the counts of the errors in the first group. For the first group of errors, the above query gives me this result:

error_message          count
blacklisted            10
captcha error          9
Internal Server Error  8

What I need is:

error_message          count
blacklisted            10
captcha error          9
Internal Server Error  8
Service Unavailable    5

That is, I need to show the count of an error even if it is not in JSON format. Both error types start with the common string "Log: error in action". If I use another query like:

index=my_index_name sourcetype=my_source_type_name:app
| spath message
| regex message="^.*Service Unavailable - Zero size object.*$"
| stats count as error_count

it gives me the count, but first I want to combine the results and show them as a single table, and second, that query is limited to one specific error message. So, when the text after "Log: error in action" is not JSON, I would like to show a part of that message and the corresponding count. I am new to Splunk, and it would be very helpful if someone could point out a solution.
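A sketch of one possible approach (assuming the HTML variant always carries a <TITLE> element; the rex pattern and the html_error field name are illustrative): keep the JSON parsing as-is, extract the HTML title as a fallback label, and coalesce the two before counting:

index=my_index_name sourcetype=my_source_type_name:app
| spath message
| regex message="error in action"
| eval error_json=replace(ltrim(message, "Log: \"error in action"),"\\\\\"","\"")
| spath input=error_json output=error_message path=message_error
| rex field=message "<TITLE>(?<html_error>[^<]+)</TITLE>"
| eval error_message=coalesce(error_message, html_error)
| top error_message

With this, events whose message_error path resolves keep their JSON label, and HTML-only events fall back to the page title (e.g. "Service Unavailable"), so both groups land in one count table.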
Is there any way to hide system messages on dashboard panels? I already set "depends" on the html tags, but the messages still show up for a second. If somebody knows of an option to make such messages invisible before the searches run, please let me know. My source XML is below.

<row>
  <panel depends="$panel_show2$">
    <html>
      <div style="float:left"><span style="font-size: 1.5em;">$HOST2$</span></div>
      <div style="float:right">$latest_log_time2$</div>
      <style>
        div.left { text-align: left; }
        div.right { text-align: right; }
      </style>
    </html>
    <single>
      <search depends="$panel_show2$">
        <query>`cae-real_time_monitoring_tiles($company$, $host2$)`</query>
        <earliest>-1y</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
        <refresh>5m</refresh>
        <refreshType>delay</refreshType>
      </search>
      <option name="colorBy">value</option>
      <option name="colorMode">block</option>
      <option name="drilldown">all</option>
      <option name="height">139</option>
      <option name="numberPrecision">0.0</option>
      <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
      <option name="rangeValues">[0,60,80,100]</option>
      <option name="refresh.display">progressbar</option>
      <option name="showSparkline">1</option>
      <option name="showTrendIndicator">1</option>
      <option name="trellis.enabled">1</option>
      <option name="trellis.scales.shared">1</option>
      <option name="trellis.size">small</option>
      <option name="trendColorInterpretation">standard</option>
      <option name="trendDisplayMode">absolute</option>
      <option name="unit">%</option>
      <option name="unitPosition">after</option>
      <option name="useColors">1</option>
      <option name="useThousandSeparators">1</option>
      <option name="link.openSearch.visible">false</option>
      <drilldown>
        <link target="_blank">app_utilization_realtime_detail?form.company=$company$&amp;form.application=$host2$&amp;form.feature=$trellis.value$&amp;form.department=.+&amp;drilldowned=1</link>
      </drilldown>
    </single>
  </panel>
</row>
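One pattern sometimes used for this (a sketch only; the CSS selectors for message containers vary between Splunk versions, so confirm them in your browser's dev tools) is a permanently hidden html panel that injects CSS suppressing the message elements. The $alwaysHideCSS$ token is never set, so the row itself never renders, but its <style> block still applies:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* selectors are assumptions - inspect the DOM to confirm */
        .dashboard-element .alert,
        .splunk-message-container { display: none !important; }
      </style>
    </html>
  </panel>
</row>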
Here's my challenge: I've got multiple events with the same IP but different Attack_type categories. I'm trying to combine all rows with the same IP together and make a chart that shows the Attack_type values against just one row per IP. Something like this, but as a dashboard that lists out which IPs are associated with all the different Attack_type values.
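A minimal sketch of two common approaches (assuming the IP field is called src_ip; substitute your actual index, sourcetype, and field names). The first collapses each IP to a single row listing its attack types; the second builds a count matrix that charts cleanly in a dashboard:

index=your_index sourcetype=your_sourcetype
| stats values(Attack_type) as Attack_type, count by src_ip

index=your_index sourcetype=your_sourcetype
| chart count over src_ip by Attack_type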
Hi, I have around 5 panels in a dashboard, each of which also has its own child panel. Each of these panels contains a table whose data is fetched from logs. I need to convert all the parent panels into tabs in the dashboard.
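Simple XML has no native tab element, but a sketch of a common token-based workaround (token and panel names here are illustrative) uses a link input to set a token, and depends attributes so only one "tab" of panels renders at a time:

<fieldset>
  <input type="link" token="tab">
    <label></label>
    <choice value="tab1">First panel</choice>
    <choice value="tab2">Second panel</choice>
    <default>tab1</default>
    <change>
      <condition value="tab1">
        <set token="show_tab1">true</set>
        <unset token="show_tab2"></unset>
      </condition>
      <condition value="tab2">
        <set token="show_tab2">true</set>
        <unset token="show_tab1"></unset>
      </condition>
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$show_tab1$"><!-- first parent panel and its child --></panel>
  <panel depends="$show_tab2$"><!-- second parent panel and its child --></panel>
</row>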
Hi, I have a message body text like the one below:

message : RequestBody : : :{
    individualValue : {
        xxxxxx;
        YYYY;
        (many lines of text in between)
    }
}

How can I fetch the string "individualValue" from the message body?
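A sketch using rex (assuming the raw text lives in a field called message and the block ends at the first closing brace; adjust both to your data). This pulls out the individualValue block and its contents:

index=your_index
| rex field=message "individualValue\s*:\s*\{(?<individual_value>[^}]*)\}"
| table individual_value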
I was trying to extract an IP address field. During a search, using | rex "[[ipv4]]" works fine and creates an ip field. I then wanted to save this field extraction, so I used the field extractor to do so, edited the regular expression to [[ipv4]], and saved it, but it did not work. I tried taking it down a level, editing the saved regular expression to (?<ip>[[octet]](?:\.[[octet]]){3}), which also works with the rex command during a search, but did not work when saved in the field extractor. I took it down one final level, changing it to (?<ip>(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)(?:\.(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)){3}), which doesn't use modular regular expressions but finally does work in both the search and the saved field extraction. I haven't found anything in the Splunk docs saying modular regular expressions can't be used in the field extractor, so I thought it would be best to check here whether that is the case, or whether there is maybe some other issue I can't think of.
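For reference, a sketch of the fully expanded extraction as a props.conf stanza (the stanza name is a placeholder), which sidesteps modular regexes entirely:

# props.conf - stanza name is hypothetical
[your_sourcetype]
EXTRACT-ip = (?<ip>(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)(?:\.(?:2(?:5[0-5]|[0-4][0-9])|[0-1][0-9][0-9]|[0-9][0-9]?)){3})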
I have a column that has events recorded at an interval of 1 hour. Example:

Date            Value
2010-1-1 1:00   20
2010-1-1 2:00   22
2010-1-1 3:00   21
2010-1-1 4:00   19
2010-1-1 5:00   16
...             ...
2010-1-1 24:00  12

I want to group this as one row, i.e. display it in the following format:

Date       Value
2010-1-1   (average of the 24 hourly values)

I want to achieve this in Splunk.
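A minimal sketch (assuming each event's _time carries the hourly timestamp and the field is named Value); timechart does the daily grouping directly, or use bin/stats if you want an explicit Date column:

index=your_index
| timechart span=1d avg(Value) as Value

index=your_index
| bin _time span=1d
| stats avg(Value) as Value by _time
| eval Date=strftime(_time, "%Y-%m-%d")
| table Date Value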
Hello, here's my problem: I made a search which calculates the duration between two jobs. The jobs are supposed to run during our overnight window: the first starts around 10pm and the last around 00:30, so roughly 2h30 later. It works fine, but if job A starts later (e.g. 09/04 at 00:09) I can't get the calculation, and I get two incomplete rows instead:

09/06/21  02:30:42  21:50:00  00:20:41
09/04/21  00:00:00  03:19:24  03:19:24  <<
09/03/21  00:00:00  00:09:52  00:09:52  <<
09/02/21  02:31:56  21:56:44  00:28:40

It should display only one line for that overnight window:

09/04/21  03:09:32  00:09:52  03:19:24

Sometimes it is also wrong the other way; I guess that is because job A started very late (4:36am), after the previous job B run:

09/20/21  02:19:10  22:02:02  00:21:12
09/18/21  02:48:11  04:36:59  07:25:10  <<< ??
09/16/21  02:14:33  22:22:41  00:37:13

<query>| tstats latest(evt_tsk_id) as evt_tsk_id, latest(evt_tsk_status) as evt_tsk_status, latest(evt_mes_occ_ts) as evt_mes_occ_ts, latest(evt_mes_occ_ts_epoch) as evt_mes_occ_ts_epoch
    where index=INDEX1 APP_env=ENV1 APP_inst=INSTANCE (evt_tsk_id="JOB_A" AND evt_tsk_status="1") OR (evt_tsk_id="JOB_B" AND evt_tsk_status="2")
    by _time span=1H
| bucket _time span=6H
| stats min(evt_mes_occ_ts_epoch) as start, max(evt_mes_occ_ts_epoch) as end by _time
| eval N_duration = tostring(round(end-start,0), "duration")
| eval _time = strftime(_time,"%m/%d/%y")
| convert timeformat="%H:%M:%S" ctime(start) AS JOB1
| convert timeformat="%H:%M:%S" ctime(end) AS JOB2
| rename _time as date
| table date N_duration JOB1 JOB2
| reverse</query>

Thanks in advance.
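A sketch of one common fix (the 12-hour offset is an assumption; any offset that falls in the quiet part of the day works): rather than bucketing into fixed 6-hour spans, shift each timestamp back by half a day before grouping, so every event in one overnight window, however late job A starts, lands on the same "business date":

| eval ovn_date=strftime(evt_mes_occ_ts_epoch - 43200, "%m/%d/%y")
| stats min(evt_mes_occ_ts_epoch) as start, max(evt_mes_occ_ts_epoch) as end by ovn_date
| eval N_duration=tostring(round(end-start,0), "duration")

Here 43200 is 12 hours in seconds, so a run starting at 22:00 and one starting at 00:09 the next morning both resolve to the same ovn_date.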
I have recently created a field extraction on one search head, which I have assigned to all apps and users with read and write access, and I was wondering how long it would take for a change made on one search head to be replicated to the other search heads. Also, from what I know, changes made via the GUI are always replicated to the other SHs; is this true? If so, what CAN and CANNOT be replicated across search heads via the GUI? Thanks, Regards,
Hi All, We are planning to configure some of our universal forwarders to use multiple pipeline sets. Do you have some sort of SPL that we can use to identify which forwarders have blocked queues and need an increased number of pipeline sets?
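A sketch of a search often used for this (it assumes the forwarders ship their own _internal/metrics logs to the indexers, which is the default): queue lines in metrics.log carry a blocked=true flag, so counting those by host and queue name highlights the forwarders under pressure:

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count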
Hi, I have probably an easy question for those who have done this before. I have set up a universal forwarder to collect Windows performance counters, and the collection and forwarding work fine. The thing I am curious about is that in the forwarder's inputs config I specified it to collect:

stats = average;min;max;dev;count

But in Splunk I receive an event containing Value, Min, Max, Dev and Count - everything except the "average" value. Is the average contained in the Value field instead, or have I done something wrong in the config?

collection=Processor object=Processor counter="% Idle Time" instance=_Total Value=97.1635562216005 Min=59.084268219671145 Max=99.46225663681797 Dev=6.00739691330151 Count=300

From the config:

[perfmon://Processor]
index = main
interval = 600
counters = % Processor Time;% Idle Time
object = Processor
instances = *
formatString = %.20g
instance = _Total;% Idle Time
samplingInterval = 2000
stats = average;min;max;dev;count
mode = single
disabled = false
Hi, We are using Microsoft SQL Server as the database for one of our applications. For Microsoft SQL Server, by default, we are able to see basic hardware metrics like CPU usage, memory usage, and disk I/O. Is it possible to also get disk usage by using DB agents? Regards, Madhusri R
I want to get metrics from multiple index/sourcetype combinations. I have been using the append clause and subsearches to do it, but I need to process a lot of events and I hit the limitations of subsearches: although I get all the data from the primary query, the appends get truncated. I'm sure there is an easy way of doing this, and it's what Splunk is meant to do, but I can't work out how to cater for the different manipulation that needs to be done depending on the index and sourcetype. The following is a relatively simple one, but I have more complex queries which need to calculate rates from absolute values, etc. So basically I have 3 queries (one that needs a join so I can do some calculations), keep _time, host and the metric I want, and then do the visualisation.

index=windows sourcetype=PerfmonMk:Memory host IN(host1,host2,host3)
| join type=outer host
    [ search index=windows sourcetype=WMI:ComputerSystem host IN(host1,host2,host3) earliest=-45d latest=now()
      | stats last(TotalPhysicalMemory) as TPM by host
      | eval TPM=TPM/1024/1024 ]
| eval mem=((TPM-Available_MBytes)/TPM)*100
| fields _time host mem
| append
    [ search index=linux sourcetype=vmstat host IN(host4,host5,host6)
      | where isnotnull(memUsedPct)
      | eval mem=memUsedPct
      | fields _time host mem ]
| append
    [ search index=unix sourcetype="nmon-MEMNEW" host IN(host7,host8,host9)
      | where isnotnull('Free%')
      | eval mem=100-'Free%'
      | fields _time host mem ]
| eval host=upper(host)
| timechart limit=0 span=1h perc95(mem) as Memory_Utilisation by host
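A sketch of the usual append-free pattern (host lists and metric formulas are copied from the query above; the structure is the suggestion): OR the base searches together into a single pass over the data, then compute the metric per sourcetype with case(), leaving only the join subsearch:

(index=windows sourcetype=PerfmonMk:Memory host IN(host1,host2,host3))
OR (index=linux sourcetype=vmstat host IN(host4,host5,host6))
OR (index=unix sourcetype="nmon-MEMNEW" host IN(host7,host8,host9))
| join type=outer host
    [ search index=windows sourcetype=WMI:ComputerSystem host IN(host1,host2,host3) earliest=-45d latest=now()
      | stats last(TotalPhysicalMemory) as TPM by host
      | eval TPM=TPM/1024/1024 ]
| eval mem=case(sourcetype=="PerfmonMk:Memory", ((TPM-Available_MBytes)/TPM)*100,
    sourcetype=="vmstat", memUsedPct,
    sourcetype=="nmon-MEMNEW", 100-'Free%')
| eval host=upper(host)
| timechart limit=0 span=1h perc95(mem) as Memory_Utilisation by host

More complex per-source rate calculations can follow the same shape: add a branch to the case(), or apply streamstats by host after the single base search.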
Hi guys, I am very new to Splunk and this is only my first week using it. What I want to do is view the performance logs of my own local machine and then put them into a dashboard. It would also be good to be able to get the number of times I have logged into my laptop, if that is possible. The question is: do I need to use a universal forwarder to be able to do all this? I am not sure; from what I have read online, the universal forwarder is used for remote machines, but because it's local, would I need to use one? I can imagine this being a very noobie question, but I need the help if someone is able to. Thank you
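For what it's worth, no forwarder is needed when Splunk Enterprise runs on the machine it is monitoring; local inputs work directly. A sketch of inputs.conf stanzas using the standard Windows input types (treat the details as assumptions to check against your version's docs; the same inputs can be enabled from Settings > Data inputs in the UI):

# inputs.conf - a sketch only
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 10
disabled = 0

# logon events (EventCode 4624) in the Security log can answer "how many times did I log in?"
[WinEventLog://Security]
disabled = 0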
Hi, I have a query, and I am not sure why it's not working. Assume I have the following JSON record, which has been extracted at index time:

index: network
sourcetype: devices
record: { "deviceId": 1234, "hostName": "Router1" }

1. index=network sourcetype=devices deviceId=1234 => works as expected
2. index=network TERM(sourcetype::devices) => works as expected
3. index=network TERM(sourcetype::devices) deviceId=1234 => fails, returns 0 records
4. index=network TERM(sourcetype::devices) earliest=-7d@d => fails, returns 0 records
5. index=network sourcetype::devices deviceId=1234 => works as expected
6. index=network sourcetype::devices deviceId::1234 => works as expected
7. index=network sourcetype::devices deviceId::1234 earliest=-7d@d => works as expected

The real question is: why do queries 3 and 4 fail when the others work, especially when query 2 works and returns the correct data? What impact does TERM() have in the process flow, such that earliest and = make it fail?

cheers
-brett
Hi, I tried to find this in the docs, but no luck; more than happy to RTM if someone has the link. On the black menu, top right, there is Help, with sub-menus of:

...
Tutorials
Help with this page
File a bug
...

I want to change where these point to, or be able to leverage the links they point to. For example:

Help with this page: where do I put my own docs so they will be used?
File a bug: I want this to point to my Jira.
Tutorials: I want this to point to a wiki or SharePoint or ?

Cheers
-brett
So I am very new to Splunk and I have just started using it. What I want to do is be able to view my own laptop's operating system file logs and performance data. What I have been doing is logging onto my Splunk instance, selecting the "Add Data" button, and from there selecting the "Monitor" button. For example, I have chosen to monitor my local events log, but for some reason when I try to search anything I get nothing, so something is wrong and I don't know what. Please help.
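Two diagnostic searches sometimes used to narrow this down (sketches; run them over All time, since a misparsed timestamp can push events outside the default search window). The first shows which indexes actually contain events; the second shows which sourcetypes arrived where:

| eventcount summarize=false index=* index=_*

index=* earliest=0
| stats count by index, sourcetype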
Here's an example of some error logs that simply show which app reported an error and in which country:

_time(s)  sourcetype  country
0         app1        US
1         app1        DE
2         app2        DE
65        app2        US
66        app2        US
67        app1        DE

Here's the timechart I would like to retrieve (span=1m):

_time                app1                app2
2021-09-30 00:00:00  {"US": 1, "DE": 1}  {"DE": 1}
2021-09-30 00:01:00  {"DE": 1}           {"US": 2}

Is this, or something similar, possible?
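A sketch of one way to approximate that layout (the string formatting is illustrative, and the cells end up as strings rather than real JSON objects): count per minute, sourcetype and country, format each country/count pair, then pivot with xyseries so each sourcetype becomes a column:

index=your_index
| bin _time span=1m
| stats count by _time, sourcetype, country
| eval pair="\"" . country . "\": " . count
| stats list(pair) as pairs by _time, sourcetype
| eval cell="{" . mvjoin(pairs, ", ") . "}"
| xyseries _time sourcetype cell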
I have a multi-site cluster, and am planning on decommissioning one site to transform it into a single-site cluster. I am looking over these two guides:

https://docs.splunk.com/Documentation/Splunk/8.0.2/Indexer/Decommissionasite
https://docs.splunk.com/Documentation/Splunk/8.1.2/Indexer/Converttosinglesite

and trying to see how to do both, preferably at the same time. When converting to a single site, the docs state to stop the entire cluster, update the configurations, then start the cluster back up. Is there any issue with making the configuration changes necessary for decommissioning the old site while everything is offline, and only bringing up the remaining site? Basically, the current plan is:

1. Stop all nodes
2. Update the manager configs:
   - Set multisite to false
   - Set single-site search/replication factors
   - Remove the site attribute
   - Remove the available_sites attribute/site mappings
3. Update the search head configs:
   - Set multisite to false
   - Remove the site attribute
4. Start the nodes that remain in the new site

Would this work, or would it cause conflicts in replication somehow? Do I need to use Splunk commands on the cluster manager to remove the old indexers?
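For illustration, a sketch of what the manager's server.conf could look like after those edits (attribute names follow the standard [clustering] stanza, but verify names and values against the linked docs for your version):

# server.conf on the cluster manager - a sketch only
[general]
# the "site = site1" attribute is removed entirely

[clustering]
# "mode = manager" in newer versions
mode = master
multisite = false
replication_factor = 3
search_factor = 2
# available_sites, site_replication_factor and site_search_factor removed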