All Posts

Hello Splunkers!! I have a raw event, but the fields server IP and server name are not present in it, and I need to add both of these fields in Splunk at index time. Both fields have static values. Which attributes should I use in props.conf and transforms.conf so that I can get both of these fields?

Servername="mobiwick" ServerIP="10.30.xx.56.78"

Sample raw data:
<?xml version="1.0" encoding="utf-8"?><StaLogMessage original_root="ToLogMessage"><MessageId>6cad0986-d4b2-45e2-b5b1-e6a1af3c6d40</MessageId><MessageTimeStamp>2024-11-24T07:00:00.1115119Z</MessageTimeStamp><SenderFmInstanceName>TOP/Top</SenderFmInstanceName><ReceiverFmInstanceName>BPI/Bpi</ReceiverFmInstanceName><StatisticalElement><StatisticalSubject><MainSubjectId>NICKER</MainSubjectId><SubjectId>Prodtion</SubjectId><SubjectType>PLAN</SubjectType></StatisticalSubject><StatisticalItem><StatisticalId>8</StatisticalId><Period><TimePeriodEnd>2024-11-24T07:00:00Z</TimePeriodEnd><TimePeriodStart>2024-11-24T06:00:00Z</TimePeriodStart></Period><Value>0</Value></StatisticalItem></StatisticalElement></SogMessage>
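A minimal sketch of one common approach (not necessarily the accepted answer in this thread): a TRANSFORMS stanza with REGEX = . matches every event of the source type, FORMAT writes a static key::value pair into _meta at index time, and fields.conf marks the fields as indexed so they are searchable. The sourcetype name my_xml_sourcetype is a placeholder; the values are copied as-is from the question.

props.conf (indexers or heavy forwarders):
[my_xml_sourcetype]
TRANSFORMS-add_static = add_servername, add_serverip

transforms.conf:
[add_servername]
REGEX = .
FORMAT = Servername::mobiwick
WRITE_META = true

[add_serverip]
REGEX = .
FORMAT = ServerIP::10.30.xx.56.78
WRITE_META = true

fields.conf (search heads):
[Servername]
INDEXED = true

[ServerIP]
INDEXED = true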
Hi, I posted sample log entries. I am not sure how readable this is.
Nov 24 15:01:43 pphost.company.com 2024-11-24T04:01:43.100466+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv mod=session cmd=disconnect module= rule= action= helo=sendinghost msgs=1 rcpts=2 routes=allow_relay,default_inbound,internalnet duration=0.128 elapsed=100
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.614350+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=msg module= rule= action=continue attachments=0 rcpts=2 routes=allow_relay,default_inbound,internalnet size=5441 guid=jAIwVNBFVxC8EycWPq7c1MicIX5v1om5 hdr_mid=<42y9nt2euv-1@pphost.company.com> qid=4AO403EC022673 hops-ip=10.20.30.40 subject="MESSAGE SUBJECT" duration=0.125 elapsed=0.127
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.614025+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 cmd=send profile=mail qid=4AO403EC022673 rcpts=RECIPIENT1@company.com,RECIPIENT2@company.com
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505939+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=judge module=none rule=none
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505617+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=dmarc cmd=run arc_result=none arc_result_detail=none arc_trusted_flag=0 arc_override=0 dmarc_detail="nothing to see here" dmarc_record=none dmarcverified= final_dmarc_result=none orig_dmarc_result=none auth_result=none original_auth_result= dyndmarc_override_id= dmarcoverride_type=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505346+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=dkimv cmd=run rule=none dkimresult=none spfheaderfromresult=none duration=0.000
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.505036+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=spf cmd=run cmd=eob result=none
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.504665+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=attachment id=0 file=text.html mime=text/html type=html omime=text/html oext=html corrupted=0 protected=0 size=3550 virtual=0 sha256=11dbefae8a521d127ef990b45e998cae68184d56a3d657ee6661f11a8b048d85 a=0 duration=0.000
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.499983+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mimelint cmd=getlint warn=0
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.499957+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mimelint cmd=getlint mime=1 score=0 threshold=100 duration=0.000
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.499913+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mimelint cmd=getlint lint=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.497214+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=headers hfrom=noreply@company.com routes= notroutes=*
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.495249+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data rcpt_routes=default_inbound rcpt_notroutes=journal data_routes= data_notroutes=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.494969+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data rcpt=recipeint2@company.com suborg=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.494950+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data rcpt=recipient1@company.com suborg=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.494892+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=session cmd=data from=noreply@company.com suborg=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.489936+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=env_rcpt r=2 value=recipient2@company.com orcpt=RECIPIENT2@company.com verified= routes=default_inbound notroutes=journal
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.488974+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=env_rcpt r=1 value=recipient1@company.com orcpt=RECIPIENT1@company.com verified= routes=default_inbound notroutes=journal
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.487458+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv m=1 x=42y9nt2euv-1 mod=mail cmd=env_from value=noreply@company.com ofrom=NoReply@company.com size= smtputf8= qid=42y9nt2euv-1 tls= routes= notroutes=tls_fallback host=sendinghost.company.com ip=10.20.30.40
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.486235+00:00 pphost filter_instance1[1523]: info s=42y9nt2euv mod=mail cmd=helo value=sendinghost extended=1 routes=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.484673+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv mod=session cmd=resolve host=sendinghost.company.com resolve=ok reverse=sendinghost.company.com routes=allow_relay notroutes=
Nov 24 15:00:03 pphost.company.com 2024-11-24T04:00:03.065376+00:00 pphost filter_instance1[1523]: rprt s=42y9nt2euv mod=session cmd=connect ip=10.20.30.40 country=** lip=50.60.70.80 prot=smtp:smtp hops_active=f routes=internalnet notroutes=firewallsafe,outbound,pp_spoofsafe,spfsafe,tls,xclient_trusted perlwait=0.002
Thank you marnall. I will try this approach and report back.
If this is the only thing modifying your metrics index, you could verify whether the data is not mcollected at all or just "mistimed". Run

| mstats count(*) where index=<your_metrics_index> | transpose 0 | stats sum("row 1") as total

over All Time before and after the scheduled search runs and verify the counts.
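For reference, here is a sketch of both checks as standalone searches, with my_metrics standing in as a placeholder for your metrics index. The first returns a single grand total; the second, run over All Time, breaks the counts out per day, so a data point written with an unexpected timestamp shows up in the wrong bucket:

| mstats count(*) where index=my_metrics
| transpose 0
| stats sum("row 1") as total

| mstats count(*) where index=my_metrics span=1d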
Hello, I need help regarding an add-on which I built. This add-on was built using the Splunk Add-on Builder; it passed all the tests and can be installed on Splunk Enterprise and also on a single instance of Splunk Cloud. However, when it is installed on a cluster it does not work properly. The add-on is supposed to create some CSV files and store them in the add-on. However, when it is installed in a clustered Splunk environment, it does not create the CSV files and simply does not download the files it is supposed to download. Any help or advice is welcome, please. This is the add-on: https://classic.splunkbase.splunk.com/app/7002/#/overview
Hi @anna,

The chart, timechart, xyseries, and tstats commands all produce output suitable for a run chart, depending on our source events. The chart and timechart commands bin _time into spans automatically or using the span size we specify:

index=web sourcetype=access_common | chart limit=0 usenull=false count over _time span=1h by status

index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status

Note that the timechart command pads the results with empty bins spanning our search time range. To remove extraneous bins, set the fixedrange argument to false:

| timechart fixedrange=false limit=0 span=1h usenull=false count by status

The xyseries command first requires binning _time using the bin command and then aggregating status values using the stats command. Like the chart and timechart commands, the bin command bins the target field into spans automatically or using the span size we specify; however, the stats command does not generate empty bins. We use the makecontinuous command after the xyseries command to add missing _time bins:

index=web sourcetype=access_common | bin _time span=1h | stats count by _time status | xyseries _time status count | makecontinuous _time | fillnull value=0

The tstats command is similar to the stats command but works with indexed fields or terms and data models. We can pipe the output through either the timechart command or the xyseries command to group events by status over _time.

| tstats prestats=true count from datamodel=Web.Web by _time Web.status | rename Web.* as * | timechart limit=0 span=1h usenull=false count by status

Like the chart and timechart commands, the tstats command normally bins _time into spans automatically or using the span size we specify; however, when using a datamodel without the prestats argument, specify a span:

| tstats count from datamodel=Web.Web by _time span=1h Web.status | rename Web.* as * | xyseries _time status count | makecontinuous _time | fillnull value=0

If the status field is indexed, the tstats command can reference it directly:

| tstats prestats=true count where index=web sourcetype=iis by _time sc_status | rename sc_status as status | timechart limit=0 span=1h usenull=false count by status

If the status field and value appear in _raw as a key-value pair not separated by a major breaker, we can use the PREFIX() directive to access the status value as if it were indexed. Given events like the following:

Nov 23 12:00:00 my_server my_app[1234]: c_ip=192.0.2.1 request="GET /favicon.ico" status=500 bytes=1024

we can execute:

| tstats prestats=true count where index=web sourcetype=access_common by _time PREFIX(status=) | rename status= as status | timechart limit=0 span=1h usenull=false count by status

The prestats argument in these examples instructs the tstats command to produce results suitable for the chart, stats, and timechart commands. The prestats argument is not required, but it allows subsequent commands to work as they would following other generating commands. The most common generating command is the search command, which we use implicitly in every search that doesn't begin with a pipe. We can find more information on breakers, the tstats command, and the PREFIX() directive in the Splunk documentation.
Irrespective of the method used to count events, we can add a total field by piping the results through the addtotals command:

index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status | addtotals fieldname="total requests"

We can filter status values while retaining a total request count in two (or more!) ways: 1) group status values before aggregating or 2) filter status values after aggregating.

We can use the eval and appendpipe commands to group status values and calculate total requests over _time:

index=web sourcetype=access_common | eval status=if(status>=400, status, "other") | bin _time span=1m | stats count by _time status | appendpipe [| stats sum(count) as count by _time | eval status="total requests" ] | where status!="other" | xyseries _time status count | fillnull value=0

We can use the untable command to filter status values after aggregating:

index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status | addtotals fieldname="total requests" | untable _time status count | where status>=400 OR status=="total requests" | xyseries _time status count | makecontinuous _time | fillnull value=0

The makecontinuous command isn't required following the timechart command, but it's included here as a failsafe.

As @gcusello and @PickleRick noted, we use the Line Chart visualization to produce a run chart and then save the visualization to a new or existing dashboard. We can also create the Line Chart directly using the classic and Dashboard Studio editors. All of the examples used above produce a similar line chart (rendered in log scale; chart image omitted here).

In the Simple XML Line Chart visualization, the points on the lines and the legend entries are drill-down targets. We can access the status value using the click.name2 token and the context-sensitive time range using the earliest and latest tokens:

<form version="1.1" theme="light">
  <label>Drilldown Example</label>
  <fieldset submitButton="false">
    <input type="time" token="time_tok" searchWhenChanged="true">
      <label>Time</label>
      <default>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Volumes</title>
      <chart>
        <search>
          <query>index=web sourcetype=access_common | timechart limit=0 span=1h usenull=false count by status | addtotals fieldname="total requests" | untable _time status count | where status>=400 OR status=="total requests" | xyseries _time status count | makecontinuous _time | fillnull value=0</query>
          <earliest>$time_tok.earliest$</earliest>
          <latest>$time_tok.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisTitleX.text">t</option>
        <option name="charting.axisTitleY.text">#</option>
        <option name="charting.axisY.scale">log</option>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_common%20status%3D%22$click.name2$%22&amp;earliest=$earliest$&amp;latest=$latest$</link>
        </drilldown>
      </chart>
    </panel>
  </row>
</form>

Clicking the "total requests" series will generate the following search, which will return no results:

index=web sourcetype=access_common status="total requests"

We can handle this case using condition elements within the drilldown element:

<drilldown>
  <condition match="$click.name2$ == &quot;total requests&quot;">
    <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_common%20status%3D*&amp;earliest=$earliest$&amp;latest=$latest$</link>
  </condition>
  <condition match="$click.name2$ != &quot;total requests&quot;">
    <link target="_blank">search?q=index%3Dweb%20sourcetype%3Daccess_common%20status%3D%22$click.name2$%22&amp;earliest=$earliest$&amp;latest=$latest$</link>
  </condition>
</drilldown>

Clicking the "total requests" series will now generate the following search:

index=web sourcetype=access_common status=*

Note that using a custom link target allows us to use any of the example searches shown above, assuming the underlying events share the same index and source type. Data models, tags, default search indexes, and other configuration techniques can help standardize or simplify our approach to searching data. Dashboard Studio (not shown) provides similar functionality; however, conditional drilldowns to custom URLs are not supported.

(Thank you for coming to my TED Talk. Disclaimer: Not a TED Talk. I hope this was helpful! I write these responses as an exercise for myself as well.)
Assuming you have your table filled with data on a per-message level, you could use stats to get the count per sender and/or connecting IP. Let's assume each row of your table has a valid ConnectingIP but not always a sender email address; then give the sender email a fillnull value so the stats also works for rows that don't have a Sender value:

<yoursearch>
| fillnull value="(no sender)" Sender
| stats count by Sender ConnectingIP
| lookup dnslookup clientip as ConnectingIP output clienthost as ReverseLookup
I think so. I considered permissions and tried it in a lab setting as the admin user, and it was the same result. Besides, it works fine when the same user does it manually, so I wouldn't have thought it would be any different for an automated one.
It would be easier for us to help you if you could share some anonymised sample events in a code block (using the </> button above).
Hello, I have the following query to search Proofpoint logs.

index=ppoint_prod host=*host1*
| eval time=strftime(_time, "%m-%d-%y %T")
| rex "env_from\s+value=(?<sender>\S+)"
| rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)"
| stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s
| where Sender!=""

It provides what I need on a per-message level. How would I modify this to get the list of ConnectingIP and ReverseLookup values per Sender? If possible it would be nice to also get the number of messages per sender, but it is not absolutely necessary. I understand I will need to drop from the query everything that is message-specific, like Subject, NumberOfAttachments, etc. I am looking to get something like this:

sender1@domain.com    ConnectingIP_1    ReverseLookup_1
                      ConnectingIP_2    ReverseLookup_2
sender2@domain.com    ConnectingIP_3    ReverseLookup_3
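One possible way to roll the per-message results up by sender is to keep the search above as-is and add a second stats pass; a sketch using only the field names produced by the stats above (each row of the first stats represents one session/message, so count gives messages per sender):

<the existing search above>
| stats values(ConnectingIP) as ConnectingIP values(ReverseLookup) as ReverseLookup count as Messages by Sender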
Thanks a lot Marnall. That worked. 
Yes. For different input stanzas - sure. But you can't - for example - have multiple apps defining multiple meta entries (like one for the environment the forwarder is in and another for the OS or team responsible or whatever) for the same input.
In terms of pathing to the config file, you can think of _cluster as an app. So you can do either:

master-apps/_cluster/local
-or-
master-apps/<yourapp>/local

But it makes no sense to combine the two, as in:

master-apps/_cluster/<yourapp>/local

ref: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations

Also, one other thing came to mind: you are using the old name "master-apps" rather than the new name "manager-apps". This is fine as long as all your apps are placed either in master-apps or manager-apps, but your apps should not be in both folders.
Could you let me know why you need to skip the _cluster path?
Yes, I know the WebUI should be disabled on the indexers, but it's a test environment, so it's enabled.
You can have separate _meta entries for different input stanzas. If you have two heavy forwarders handling different inputs, then this should be doable. I've not tried it in a generalized input stanza, but if the number of input stanzas is low, then it is feasible to add a _meta entry for each.
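A minimal sketch of what that could look like in inputs.conf on each forwarder, using hypothetical monitor stanzas and field names (environment, team); several key::value pairs can share one _meta line, and fields.conf on the search head makes them searchable as indexed fields:

inputs.conf (forwarder A):
[monitor:///var/log/app/app.log]
sourcetype = app:log
_meta = environment::prod team::payments

inputs.conf (forwarder B):
[monitor:///var/log/web/access.log]
sourcetype = access_combined
_meta = environment::dev team::web

fields.conf (search heads):
[environment]
INDEXED = true

[team]
INDEXED = true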
What do you mean by "we go to the Data inputs => HTTP Event Collector at indexer Side"? You have WebUI enabled on your clustered indexers? It's a big no-no.
Are you sure the user for which the search is scheduled has appropriate capabilities to run mcollect and access to the destination index?
Unfortunately, there is just one "instance" of the _meta entry in the whole config, so you can't "merge" separate _meta settings - one will overwrite the other. That's why TRANSFORMS is a better approach. I'm also not sure what _meta will do on the splunktcp input, especially when handling an input stream that already contains metadata fields.
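A minimal sketch of that TRANSFORMS alternative on the parsing tier (indexer or heavy forwarder), using a hypothetical host pattern and field name; the transform matches every event covered by the stanza and appends an indexed environment field to _meta:

props.conf:
[host::webfarm-*]
TRANSFORMS-add_env = add_env_prod

transforms.conf:
[add_env_prod]
REGEX = .
FORMAT = environment::prod
WRITE_META = true

fields.conf (search heads):
[environment]
INDEXED = true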