If this is the only thing modifying your metrics index, you could verify whether the data is not mcollected at all or just "mistimed". Run

| mstats count(*) where index=<your_metrics_index> | transpose 0 | stats sum("row 1") as total

over all time, both before and after the scheduled search runs, and compare the counts.
Hello, I need help regarding an add-on I built. The add-on was built using the Splunk Add-on Builder; it passed all the tests and can be installed on Splunk Enterprise as well as on a single-instance Splunk Cloud deployment. However, when it is installed on a cluster it does not work properly. The add-on is supposed to create some CSV files and store them in the add-on's directory, but in a clustered Splunk environment it does not create the CSV files and does not download the files it is supposed to download. Any help or advice is welcome. This is the add-on: https://classic.splunkbase.splunk.com/app/7002/#/overview
Hi @anna, The chart, timechart, xyseries, and tstats commands all produce output suitable for a run chart, depending on our source events.

The chart and timechart commands bin _time into spans automatically or using the span size we specify:

index=web sourcetype=access_common | chart limit=0 usenull=false count over _time span=1h by status

index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status

Note that the timechart command pads the results with empty bins spanning our search time range. To remove extraneous bins, set the fixedrange argument to false:

| timechart fixedrange=false limit=0 span=1h usenull=false count by status

The xyseries command first requires binning _time using the bin command and then aggregating status values using the stats command. Like the chart and timechart commands, the bin command bins the target field into spans automatically or using the span size we specify; however, the stats command does not generate empty bins. We use the makecontinuous command after the xyseries command to add missing _time bins:

index=web sourcetype=access_common | bin _time span=1h | stats count by _time status | xyseries _time status count | makecontinuous _time | fillnull value=0

The tstats command is similar to the stats command but works with indexed fields or terms and data models. We can pipe the output through either the timechart command or the xyseries command to group events by status over _time:

| tstats prestats=true count from datamodel=Web.Web by _time Web.status | rename Web.* as * | timechart limit=0 span=1h usenull=false count by status

Like the chart and timechart commands, the tstats command normally bins _time into spans automatically or using the span size we specify; however, when using a data model without the prestats argument, specify a span:

| tstats count from datamodel=Web.Web by _time span=1h Web.status | rename Web.* as * | xyseries _time status count | makecontinuous _time | fillnull value=0

If the status field is indexed, the tstats command can reference it directly:

| tstats prestats=true count where index=web sourcetype=iis by _time sc_status | rename sc_status as status | timechart limit=0 span=1h usenull=false count by status

If the status field and value appear in _raw as a key-value pair not separated by a major breaker, we can use the PREFIX() directive to access the status value as if it were indexed. Given events like the following:

Nov 23 12:00:00 my_server my_app[1234]: c_ip=192.0.2.1 request="GET /favicon.ico" status=500 bytes=1024

we can execute:

| tstats prestats=true count where index=web sourcetype=access_common by _time PREFIX(status=) | rename status= as status | timechart limit=0 span=1h usenull=false count by status

The prestats argument in these examples instructs the tstats command to produce results suitable for the chart, stats, and timechart commands. The prestats argument is not required, but it allows subsequent commands to work as they would following other generating commands. The most common generating command is the search command, which we use implicitly in every search that doesn't begin with a pipe. We can find more information on breakers, the tstats command, and the PREFIX() directive in the Splunk documentation.
Irrespective of the method used to count events, we can add a total field by piping the results through the addtotals command:

index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status | addtotals fieldname="total requests"

We can filter status values while retaining a total request count in two (or more!) ways: 1) group status values before aggregating, or 2) filter status values after aggregating.

We can use the eval and appendpipe commands to group status values and calculate total requests over _time:

index=web sourcetype=access_common | eval status=if(status>=400, status, "other") | bin _time span=1m | stats count by _time status | appendpipe [| stats sum(count) as count by _time | eval status="total requests" ] | where status!="other" | xyseries _time status count | fillnull value=0

We can use the untable command to filter status values after aggregating:

index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status | addtotals fieldname="total requests" | untable _time status count | where status>=400 OR status=="total requests" | xyseries _time status count | makecontinuous _time | fillnull value=0

The makecontinuous command isn't required following the timechart command, but it's included here as a failsafe.

As @gcusello and @PickleRick noted, we use the Line Chart visualization to produce a run chart and then save the visualization to a new or existing dashboard. We can also create the Line Chart directly using the classic and Dashboard Studio editors. All of the examples above produce a similar line chart (shown in log scale in the original post).

In the Simple XML Line Chart visualization, the points on the lines and the legend entries are drill-down targets. We can access the status value using the click.name2 token and the context-sensitive time range using the earliest and latest tokens:

<form version="1.1" theme="light">
  <label>Drilldown Example</label>
  <fieldset submitButton="false">
    <input type="time" token="time_tok" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Volumes</title>
      <chart>
        <search>
          <query>index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status | addtotals fieldname="total requests" | untable _time status count | where status>=400 OR status=="total requests" | xyseries _time status count | makecontinuous _time | fillnull value=0</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisTitleX.text">t</option>
        <option name="charting.axisTitleY.text">#</option>
        <option name="charting.axisY.scale">log</option>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_common%20status%3D%22$click.name2$%22&amp;earliest=$earliest$&amp;latest=$latest$</link>
        </drilldown>
      </chart>
    </panel>
  </row>
</form>

Clicking the "total requests" series will generate the following search, which will return no results:

index=web sourcetype=access_common status="total requests"

We can handle this case using condition elements within the drilldown element:

<drilldown>
  <condition match="$click.name2$ == &quot;total requests&quot;">
    <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_common%20status%3D*&amp;earliest=$earliest$&amp;latest=$latest$</link>
  </condition>
  <condition match="$click.name2$ != &quot;total requests&quot;">
    <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_common%20status%3D%22$click.name2$%22&amp;earliest=$earliest$&amp;latest=$latest$</link>
  </condition>
</drilldown>

Clicking the "total requests" series will now generate the following search:

index=web sourcetype=access_common status=*

Note that using a custom link target allows us to use any of the example searches shown above, assuming the underlying events share the same index and source type. Data models, tags, default search indexes, and other configuration techniques can help standardize or simplify our approach to searching data. Dashboard Studio (not shown) provides similar functionality; however, conditional drilldowns to custom URLs are not supported. (Thank you for coming to my TED Talk. Disclaimer: Not a TED Talk. I hope this was helpful! I write these responses as an exercise for myself as well.)
Assuming you have your table filled with data on a per-message level, you could use stats to get the count per sender and/or connecting IP. Let's assume each row in your table has a valid ConnectingIp but not always a sender email address; then give the sender email a fillnull value so that stats also works for rows that don't have a Sender value:

<yoursearch>
| fillnull value="(no sender)" Sender
| stats count by Sender ConnectingIp
| lookup dnslookup clientip as ConnectingIp output clienthost as ReverseLookup
I think so. I considered permissions and tried it in a lab setting as the admin user, and the result was the same. Besides, it works fine when the same user runs it manually, so I wouldn't have thought it would be any different for an automated run.
It would be easier for us to help you if you could share some anonymised sample events in a code block (using the </> button above).
Hello, I have the following query to search Proofpoint logs:

index=ppoint_prod host=*host1* | eval time=strftime(_time, "%m-%d-%y %T") | rex "env_from\s+value=(?<sender>\S+)" | rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)" | stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s | where Sender!=""

It provides what I need on a per-message level. How would I modify this to get a list of ConnectingIP and ReverseLookup values per Sender? If possible it would be nice to also get the number of messages per sender, but it is not absolutely necessary. I understand I will need to drop from the query everything that is message-specific, like Subject, NumberOfAttachments, etc. I am looking to get something like this:

sender1@domain.com   ConnectingIP_1   ReverseLookup_1
                     ConnectingIP_2   ReverseLookup_2
sender2@domain.com   ConnectingIP_3   ReverseLookup_3
Thanks a lot Marnall. That worked. 
Yes. For different input stanzas - sure. But you can't - for example - have multiple apps defining multiple meta entries (like one for the environment the forwarder is in and another for the OS or team responsible or whatever) for the same input.
In terms of pathing to the config file, you can think of _cluster as an app. So you can do:

master-apps/_cluster/local

-or-

master-apps/<yourapp>/local

But it makes no sense to combine the two into:

master-apps/_cluster/<yourapp>/local

ref: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations

Also, one other thing came to mind: you are using the old name "master-apps" rather than the new name "manager-apps". This is fine as long as all your apps are placed either in master-apps or manager-apps, but your apps should not be in both folders.
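To illustrate, a minimal layout sketch on the cluster manager (the http_input app name is taken from this thread; the rest is an assumption about a typical deployment):

$SPLUNK_HOME/etc/master-apps/
    _cluster/
        local/
            inputs.conf      <- settings pushed to every peer, outside any specific app
    http_input/
        local/
            inputs.conf      <- the same settings scoped to the http_input app instead

Either location is picked up on the next configuration bundle push; choose one of them rather than nesting the app under _cluster.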
Kindly let me know why you need to skip the _cluster path.
Yes, I know the WebUI should be disabled on the indexers, but it's a test environment, so it's enabled.
You can have separate _meta entries for different input stanzas. If you have two heavy forwarders handling different inputs, then this should be doable. I've not tried it in a generalized input stanza, but if the number of input stanzas is low then it is feasible to add _meta entries for each.
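For illustration, a minimal inputs.conf sketch with a per-stanza _meta entry; the monitor paths, field names, and values here are hypothetical:

[monitor:///var/log/app_a/*.log]
sourcetype = app_a
_meta = environment::prod team::alpha

[monitor:///var/log/app_b/*.log]
sourcetype = app_b
_meta = environment::prod team::beta

Multiple field::value pairs can be space-separated on a single _meta line, but as noted elsewhere in this thread only one _meta setting applies per stanza, so all the pairs for a given input have to live on that one line.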
What do you mean by "we go to the Data inputs => HTTP Event Collector at indexer Side"? You have WebUI enabled on your clustered indexers? It's a big no-no.
Are you sure the user for which the search is scheduled has appropriate capabilities to run mcollect and access to the destination index?
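One quick way to check is to list the owning user's roles and capabilities via REST (a sketch; the user name below is a placeholder, and the exact capability required for mcollect may differ between Splunk versions, so compare against your version's authorize.conf documentation):

| rest /services/authentication/users splunk_server=local
| search title="<owner_of_the_scheduled_search>"
| table title roles capabilities

If the owning user's roles don't grant the mcollect capability or search access to the destination metrics index, the scheduled run can fail even though your interactive run works.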
Unfortunately, there is just one "instance" of the _meta entry in the whole config, so you can't "merge" separate _meta settings - one will overwrite the other. That's why TRANSFORMS is a better approach. I'm also not sure what _meta will do on the splunktcp input, especially when handling an input stream that already contains metadata fields.
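For reference, a minimal sketch of the TRANSFORMS approach on a heavy forwarder; the sourcetype, field name, and value below are hypothetical:

props.conf:
[my_sourcetype]
TRANSFORMS-add_site = add_site_field

transforms.conf:
[add_site_field]
REGEX = .
FORMAT = site::dc1
WRITE_META = true

fields.conf (on the search head, so the field is treated as indexed):
[site]
INDEXED = true

Because each transform is its own stanza, several transforms can add different indexed fields to the same data, which is exactly what a single _meta line cannot do.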
Can you post how your _meta field was configured? It should be in inputs.conf and have the format:

_meta = fieldname::fieldvalue

So if you have two heavy forwarders, one can have an input with:

_meta = meta_hfnum::1

and the other:

_meta = meta_hfnum::2
Hi Folks, I've been using mcollect to collect metrics from the events in my indexes, and I thought that if I set up an alert with the mcollect part in the search, it would automatically collect the metrics every X minutes. That doesn't seem to be working; the metrics are only collected when I run the search manually. Any suggestions on how I can make mcollect collect the metrics I'm looking for automatically? Thanks
You shouldn't need to put inputs.conf into master-apps/_cluster/http_input/local; it should go into either master-apps/_cluster/local or master-apps/http_input/local. Try moving it into _cluster/local or http_input/local.
To use the API to create access tokens, you need to use the management port (8089), not the web interface port (8000). You also need to remove the localization (en-US) part of your path. It should be:

curl -k -u admin:Password -X POST http://127.0.0.1:8089/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d

I also suggest using https:// instead of http://. You don't want your token to be visible in plaintext over the network.
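A follow-up sketch, assuming a default deployment where the management port already serves HTTPS, and with <token> standing in for the token string returned in the JSON response:

# same request over TLS instead of plain HTTP
curl -k -u admin:Password -X POST "https://127.0.0.1:8089/services/authorization/tokens?output_mode=json" --data name=admin --data audience=Users --data-urlencode expires_on=+30d

# use the returned token (placeholder <token>) to authenticate a later REST call
curl -k -H "Authorization: Bearer <token>" "https://127.0.0.1:8089/services/server/info?output_mode=json"

The first call differs from the example above only in using https://; the second shows the token being presented as a bearer token on a subsequent request.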