I've tried identifying all the individual fields in the events and extracting them with rex:

| rex "\s\<externalFactor\>(?<externalFactor>.*)\<\/externalFactor\>"
| rex "\s\<externalFactorReturn\>(?<externalFactorReturn>.*)\<\/externalFactorReturn\>"
| rex "\<current\>(?<current>.*)\<\/current\>"
| rex "\<encrypted\>(?<encrypted>.*)\<\/encrypted\>"
| rex "\<keywordp\>(?<keywordp>.*)\<\/keywordp\>"
| rex "\<pepres\>(?<pepres>.*)\<\/pepres\>"
| rex "\<roleName\>(?<roleName>.*)\<\/roleName\>"
| rex "\<boriskhan\>(?<boriskhan>.*)\<\/boriskhan\>"
| rex "\<sload\>(?<sload>.*)\<\/sload\>"
| rex "\<externalFactor\>(?<externalFactor>.*)\<\/externalFactor\>"
| rex "\<parkeristrator\>(?<parkeristrator>.*)\<\/parkeristrator\>"
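Since these values sit in XML-style tags, spath may extract them in one step instead of one rex per field (a sketch, not tested against your events; it assumes _raw is well-formed XML and the tags sit at the top level, otherwise the path needs the parent elements, e.g. path=root.externalFactor):

| spath output=externalFactor path=externalFactor
| spath output=roleName path=roleName

With no arguments at all, | spath attempts to auto-extract every path it finds in a valid XML or JSON event.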
I have been running snmp_ta 1.8.0 for a few years with many SNMP polls, both with and without MIBs. I am now doing a walk, probably for the first time. I get successes, but then I get a bunch of these:

ERROR Exception resolving MIBs for table: 'MibTable' object has no attribute 'getIndicesFromInstId' stanza:snmp

and then I get two of these:

ERROR Exception resolving MIBs for table: list modified during sort stanza:snmp

and no more polling of this config. There are several IPs in this config. All works well for 2-4 polling cycles, then it stops. I can run snmpwalk -v 2c -c $PUBLIC -m $MIB and get good results. I did recently install a new MIB for this device. The old MIB has the same issue, and the other configs work fine regardless. I am thinking it is related to snmpwalk, but I am having little success finding a solution. -- Frank
Working as expected, thank you  
Try eventstats instead of stats if you want to keep the original events
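For example (a minimal sketch; host and the field names are placeholders): where stats collapses events into one summary row per group, eventstats writes the same aggregate back onto every original event, so nothing is lost:

| stats count by host          (one row per host, other fields gone)
| eventstats count as host_count by host          (every event kept, with host_count added)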
Please check this add-on: https://docs.splunk.com/Documentation/AddOns/released/MSSysmon/About The documentation says it is CIM-compatible: "The Splunk Add-on for Sysmon allows a Splunk software administrator to create a Splunk software data input and CIM-compliant field extractions for Microsoft Sysmon." If the add-on feeds the Endpoint.Processes datamodel, then the use case you are interested in might work.
Like? Could you please provide an example? If I use stats, it will either merge the 4 events into 1 or not fill the empty ones per document_type. The main key fields are document_number and document_type, which are required further on. So:

| stats max(timestamp1) as timestamp1, max(timestamp2) as timestamp2, ... by document_number

will unify the events by document_number, which is not what I would like to achieve, as there are many other required fields not shown in the example.

| stats max(timestamp1) as timestamp1, max(timestamp2) as timestamp2, ... by document_number, document_type

will do nothing, as each group contains only the event itself and the empty fields stay empty.

P.S.: sorry, I forgot to add the datetime_type to the example pictures; I will add them.
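A sketch of the eventstats variant (assuming the four evaluated fields are named timestamp1..timestamp4): grouping by document_number alone fills the gaps without collapsing rows, because eventstats keeps every original event and all its other fields:

| eventstats max(timestamp1) as timestamp1, max(timestamp2) as timestamp2, max(timestamp3) as timestamp3, max(timestamp4) as timestamp4 by document_number

Only the four timestamp fields are overwritten, each with the per-document_number maximum; document_type and everything else survive untouched.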
Is there a way to detect subsearch limits being exceeded in scheduled searches? I notice that you can get this info from REST:

| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
| where isnotnull('messages.error')
| fields id savedsearch_name, app, user, executed_at, search, messages.*

And you can sort of join this to the _audit query:

index=_audit action=search (has_error_warn=true OR fully_completed_search=false OR info="bad_request")
| eval savedsearch_name = if(savedsearch_name="", "Ad-hoc", savedsearch_name)
| eval search_id = trim(search_id, "'")
| eval search = mvindex(search, 0)
| map search="| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$ | where isnotnull('messages.error') | fields id savedsearch_name, app, user, executed_at, search, messages.*"

But it doesn't really work: I get lots of rest failures reported and the output is bad. You also need to run it while the search artifacts are still present, although my plan was to run this frequently and push the result to a summary index.

Has anyone had better success with this? One thought would be to ingest the data that is returned by the rest call (I presume from var/run/dispatch). Or might debug-level logging help?
I use a PowerShell script on a Splunk forwarder that sends data with:

Write-Output $line

Splunk receives this data in the _raw field. How should a PowerShell script write key-value pairs so that Splunk sees separate keys and values instead of just _raw?
Scheduled report:

index=_internal source="*scheduler.log" log_level=ERROR
| eval user=mvindex(split(savedsearch_id, ";"), 0)
| eval app=mvindex(split(savedsearch_id, ";"), 1)
| eval search=mvindex(split(savedsearch_id, ";"), 2)
| stats count by user, app, search, message

Failed search:

index=_audit action=search has_error_warn=true fully_completed_search=false
Why not just use stats (instead of streamstats)?
Try removing the line - hour should already be coming through from the summary index
| makeresults format=csv data="event,id
A,1
B,2
C,4
D,5"
| streamstats range(id) as range window=2
| eval range=coalesce(range, id)
Yes, this is the one. In the results, just 00 is listed in the hour column. How can this be resolved to achieve results similar to the main index?
Thank you for the reply. No, I'm not talking about dashboards at all; I want to do this within a search itself, without having to use dashboards, tokens, etc. I guess I'm looking for functionality similar to what dashboards provide, but within the search itself.
I'm not sure where this goes, can you please explain what it changes?
Hi, I have the following issue: I have many events with different document_number + datetime_type combinations, each with a started_on field. There are always 4 different types per document_number. Then 4 new timestamp fields are evaluated from the type and the timestamp, so each event ends up with exactly 1 of the new timestamp fields filled. Now I need to fill the empty ones from the evaluated ones for the same document_number. With streamstats I was able to fill them forwards (after the first filled value), but not backwards. Is it possible somehow? Or only if I do | reverse and apply streamstats again?
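The reverse-and-repeat idea does work; a sketch (assuming the evaluated fields are named timestamp1..timestamp4, and that the events are sorted so each document_number's events are contiguous, since filldown has no by clause and would otherwise bleed values across documents):

| filldown timestamp1 timestamp2 timestamp3 timestamp4
| reverse
| filldown timestamp1 timestamp2 timestamp3 timestamp4
| reverse

The first filldown copies values forwards, the reversed pass copies them backwards, and the final reverse restores the original order.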
Hi @ITWhisperer, I'm just passing the token $office_filter$ in the link:

<link target="_blank">/app/SAsh/details?form.compliance_filter=$click.value$&amp;form.timerange=$timerange$&amp;form.antivirus_filter=*&amp;$office_filter$&amp;form.machine=$machine$&amp;form.origin=$origin$&amp;form.scope=$scope$</link>
This is not working. The second search has one field, StatusDescription, which I want to add to the first search using the common fields Name and host.

1st search:

```Table on Dashboard = M3_PROD_splunk__agent__universal_forwarder_status_is_down```
index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections os=Windows
| dedup hostname
| eval age=(now()-_time)
| eval LastActiveTime=strftime(_time,"%y/%m/%d %H:%M:%S")
| eval Status=if(age<3600,"Running","DOWN")
| rename age AS Age
| eval Age=tostring(Age,"duration")
| lookup 0010_Solarwinds_Nodes_Export Caption as hostname OUTPUT Application_Primary_Support_Group AS CMDB2_Application_Primary_Support_Group, Application_Primary AS CMDB2_Application_Primary, Support_Group AS CMDB2_Support_Group, NodeID AS SW2_NodeID, Enriched_SW AS Enriched_SW2, Environment AS CMDB2_Environment
| eval Assign_To_Support_Group=if(Assign_To_Support_Group_Tag="CMDB_Support_Group", CMDB2_Support_Group, CMDB2_Application_Primary_Support_Group)
| where Status="DOWN" AND NOT isnull(SW2_NodeID) AND (CMDB2_Environment="Production" OR CMDB2_Environment="PRODUCTION")
```| table _time, hostname, sourceIp, Status, LastActiveTime, Age, SW2_NodeID, Assign_To_Support_Group, CMDB2_Support_Group, CMDB2_Environment```
| table _time, hostname, sourceIp, Status, LastActiveTime, Age, Assign_To_Support_Group, CMDB2_Environment

2nd search:

index=index_name sourcetype="nodes"
| lookup lookupfile1 Name OUTPUTNEW
| dedup Caption
| table Caption StatusDescription UnManaged UnManageFrom UnManageUntil
| search UnManaged=true
| eval UnManageUntil = strftime(strptime(UnManageUntil, "%Y-%m-%dT%H:%M:%S.%QZ"), "%Y-%m-%d %H:%M:%S")
| eval UnManageFrom = strftime(strptime(UnManageFrom, "%Y-%m-%dT%H:%M:%S.%QZ"), "%Y-%m-%d %H:%M:%S")
| eval UnManageUntil = coalesce(UnManageUntil, "NOT SET") ```replaces any null values in the "UnManageUntil" field with NOT SET```
| sort -UnManageFrom ```sorts the events in descending order based on the "UnManageFrom" field```
Try using sed.  | rex mode=sed "s/rawjson=\\\"//"
Hi @isoutamo, The current count is under 150. Thank you