All Posts



Splunk is not good at finding things that aren't there - essentially, you would have to provide a list of all the servers you expect to find and discount all those that you do find, leaving you a list of servers which haven't been found.
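The expected-vs-found comparison described above can be sketched outside Splunk, in Python (the host names here are purely illustrative):

```python
# Maintain a list of every server you EXPECT to report in, then subtract
# the set of hosts actually seen in the data; what is left is missing.
expected = {"web01", "web02", "db01", "db02"}   # your full inventory
seen = {"web01", "db01"}                        # hosts that produced events
missing = sorted(expected - seen)               # servers not reporting
print(missing)                                  # ['db02', 'web02']
```

In SPL the same idea is usually implemented with a lookup of expected hosts combined with the search results, then filtering for hosts with zero events.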
In Splunk terminology it's not called a "query" but a "search". Anyway, it's a common question how to "find" something that isn't there. See https://www.duanewaddle.com/proving-a-negative/
Hey All, I'm a Splunk beginner. I'm looking to create a query to be used as an alert, specifically to identify servers not in the _inventory, i.e. those not being monitored by Splunk. If anyone could share insights or examples, thank you!
No. Splunk has no concept of fields at index time (apart from indexed fields). And even if you managed to extract all fields at index time (which is not achievable with XML logs, since no XML functions work at index time), I can think of no way to wildcard fields for creating a JSON out of them (you can't expect all Windows events to have the same field set ;-)).
These are two separate mechanisms. PowerShell input has some features that scripted input doesn't (the most important being the ability to receive PowerShell objects, not just text).
NetFlow is for flow reporting. You need Splunk Stream: https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/AboutSplunkStream
I am trying to send Cisco SD-WAN router logs to Splunk Cloud. I have installed the Universal Forwarder on the log server running syslog-ng and am able to forward text-based logs. However, the FW logs are output in HSL, which is in NetFlow v9 format. How can I get this type of data into Splunk Cloud?
Hi, could someone help me reformat the events below to align with the structure of blacklist3, organizing them into their respective blacklists or potentially combining them into a unified blacklist?

blacklist3 = $XmlRegex="<EventID>4688<\/EventID>.*<Data Name=('NewProcessName'|'ParentProcessName')>[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe"

Tanium events:
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Tools\\StdUtils\\TaniumExecWrapper\.exe|
C:\\Program Files (\x86\)\\Tanium\\Tanium Client\\Patch\\tools\\TaniumExecWrapper\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Patch\\tools\\TaniumFileInfo\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\python38\\TPython\.exe|
C:\Program Files (x86)\Tanium\Tanium Client\Tools\Patch\7za.exe

Windows Defender:
C:\Program Files\Windows Defender Advanced Threat Protection\MsSense.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseIR.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseCM.exe
C:\ProgramData\Microsoft\Windows Defender\Platform\.*\MpCmdRun.exe
C:\ProgramData\Microsoft\Windows Defender\Platform\.*\MsMpEng.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\DataCollection\.*\OpenHandleCollector.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseNdr.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\SenseCM.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\SenseIR.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\MsSense.exe
C:\Program Files\Windows Defender\MpCmdRun.exe
C:\Program Files\Windows Defender\MsMpEng.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseTVM.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\10.8560.25364.1036\SenseTVM.exe

Rapid7 (ParentProcessName, count):
C:\Program Files\Rapid7\Insight Agent\components\insight_agent\3.2.4.63\ir_agent.exe
C:\Program Files\Rapid7\Insight Agent\components\insight_agent\4.0.0.1\ir_agent.exe
C:\Program Files\Rapid7\Insight Agent\ir_agent.exe
C:\\Program Files\\Rapid7\\Insight Agent\\components\\insight_agent\\.*\\get_proxy\.exe|

Azure:
C:\Program Files\AzureConnectedMachineAgent\ExtensionService\GC\gc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_arc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_worker.exe
C:\Program Files\AzureConnectedMachineAgent\azcmagent.exe

Gytpol:
C:\\Program Files\\WindowsPowerShell\\Modules\\gytpol\\Client\\fw.*\\GytpolClientFW.*\.exe|

Forescout (ParentProcessName, count):
C:\Program Files\ForeScout SecureConnector\SecureConnector.exe

Thanks//..
Hi @aditsss , please try this regex: | rex ".*\]\s*(?<msg>[^:]+)" that you can test at https://regex101.com/r/7yLHPr/1 Ciao. Giuseppe 
Hi @DaveBunn , you have the same problem as taking events from a data source: if you don't have events, you don't have any count in the stats. First you have to define what you want to monitor: hosts, indexes, or sourcetypes. Then, having defined what to monitor (e.g. sourcetypes), you have to create another lookup (called e.g. perimeter.csv) containing all the values of the field to monitor in at least one column (e.g. sourcetype). Then you could run something like this:

| inputlookup TA_feeds.csv
| stats count BY sourcetype
| append [ | inputlookup perimeter.csv | eval count=0 | fields sourcetype count ]
| stats sum(count) AS total BY sourcetype
| eval status=if(total=0,"Not Present","Present")
| table sourcetype status

In this way, you get a table containing all the sourcetypes to monitor and, for each one, whether any rows are present or not. Ciao. Giuseppe
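The zero-filling trick described above (append every expected value with count=0, then sum) can be illustrated in Python; the sourcetype names come from this thread, but the counts are made up:

```python
from collections import Counter

# Counts actually observed in the data (made-up numbers)
observed = Counter({"mscs:Azure:VirtualMachines": 42})

# The "perimeter": every sourcetype we expect to see
perimeter = ["mscs:Azure:VirtualMachines", "Azure:Signin"]

# Analogue of append + stats sum(count): missing sourcetypes default to 0,
# so they still show up in the result instead of silently disappearing.
totals = {st: observed.get(st, 0) for st in perimeter}
status = {st: ("Present" if total > 0 else "Not Present")
          for st, total in totals.items()}
print(status)
```

The key point in both versions is the same: the list of expected values, not the event data, drives the final table.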
Hi @QuantumRgw , so you have to install a UF on your PCs and manage them using a Deployment Server, as described at https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Introduction_to_the_Splunk_Distributed_Deployment_Server_(SDDS)#:~:text=The%20Splunk%20deployment%20server%20is,Enterprise%20and%20Universal%20Forwarder%20instances.  Then you have to define your perimeter, in terms of hosts to monitor and, for each host, which logs to index. Then, having the above information, you have to define your Use Cases, e.g. monitoring of administrator accesses, presence of missing patches, presence of known malicious packets, etc. Ciao. Giuseppe
Hi, could anyone please help me convert this blacklist to an XML regex? blacklist1 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))" Thanks.
I have a lookup file called TA_feeds.csv with the six columns labeled below, containing multiple rows similar to these:

index | sourcetype | source | period | App | Input
Azure | mscs:Azure:VirtualMachines | /subscription/1111-2222-3333-4444/* | 42300 | SPlunk_Cloud | AZ_VM_Feeds
AD | Azure:Signin | main_tenant | 360 | Azure_App | AD_SignIn

I use the SPL

[| inputlookup TA_feeds.csv | eval earliest=0-period."s" | fields index sourcetype source earliest | format]
| stats count by index sourcetype source

which iterates through the lookup, searches the relevant indexes for the data one row at a time, and generates a count for each input type. The problem is that if a row in the lookup does not generate any data, then there is no entry in the stats. What I need is to be able to show when a feed is zero, i.e. | search count=0, but I can't figure out how to generate the zero entries.
Using the mvexpand command twice breaks any association between the values.  Instead, combine the fields, use mvexpand, then break them apart again. | eval logMsgTimestampInit = logMsgTimestamp | eval ID_SERVICE= mvappend(ID_SERVICE_1,ID_SERVICE_2) , TYPE= mvappend(TYPE1,TYPE2) | eval pair = mvzip(ID_SERVICE, TYPE) | mvexpand pair | eval ID_SERVICE = mvindex(pair,0), TYPE = mvindex(pair, 1) | table ID_SERVICE TYPE  
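The mvzip/mvexpand pattern above works because zipping first preserves the positional pairing between the two fields; a quick Python analogue, using the sample values from this thread:

```python
# Two parallel multivalue fields; position i in one corresponds to
# position i in the other.
ids = ["asd232", "afg567"]
types = ["mechanic_234", "hydraulic_433"]

# zip() plays the role of mvzip: pair values by position, then "expand"
# each pair into its own row, keeping the association intact (no
# cartesian product, unlike expanding each field independently).
rows = [{"ID_SERVICE": i, "TYPE": t} for i, t in zip(ids, types)]
for row in rows:
    print(row)
```

Expanding each field independently would instead produce all 2×2 combinations, which is exactly the mess described in the question.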
Thanks. It does, though I'm going to have to take a closer look at the reports. They at least appear to have the same owner (nobody) and run as Owner. Also, the Read/Write permissions are the same.
The user context is the name of the account under which the job runs. In most cases, it's the name of the user running the search, but some scheduled searches can be set to run as the owner. In the specific case of user context = splunk-system-user, that is the name used when a search has no owner (i.e., is owned by "nobody").
Hi @aditsss  Please check this:

| makeresults
| eval _raw="[AssociationRemoteProcessor] Exception while running association: javax"
| rex field=_raw "\]\s(?<rexField>.*)\:"
| table _raw rexField

This rex produces this output:

_raw: [AssociationRemoteProcessor] Exception while running association: javax
rexField: Exception while running association
It would help to know what you've tried so far, but perhaps this will help. | rex "] (?<field>.*?):"
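Both suggested patterns capture the text between "] " and a colon; since Splunk's rex uses PCRE, a Python check is only an approximation, but it is close enough to verify the idea on the sample event from this thread:

```python
import re

raw = "[AssociationRemoteProcessor] Exception while running association: javax"

# Lazy version: capture everything after "] " up to the FIRST colon.
m = re.search(r"\]\s(?P<msg>.*?):", raw)
print(m.group("msg"))  # Exception while running association
```

A site like regex101.com (linked elsewhere in this thread) is the usual way to test these interactively before putting them in a rex command.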
Hi isoutamo, sorry for the dumb question, but do I have to put only the MN in maintenance mode, or also the other nodes (except the SH)? Also, do I have to stop Splunk manually, or is it stopped automatically during the OS shutdown? Thank you, Andrea
| eval logMsgTimestampInit = logMsgTimestamp
| eval ID_SERVICE= mvappend(ID_SERVICE_1,ID_SERVICE_2) , TYPE= mvappend(TYPE1,TYPE2)
| table ID_SERVICE TYPE

ID_SERVICE | TYPE | TIME
asd232, afg567 | mechanic_234, hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Hi Dears, I have the following issue: some cells (like the first row above) appear with two values per cell, e.g. in the ID_SERVICE column, because the payload contains two service IDs in the same message.

What I need: I need to split these cells every time this occurs. I tried to use mvexpand, but unfortunately it makes a mess of the table. When I use mvexpand it duplicates the rows, creating a row for every combination of values:

... | query_search
| mvexpand ID_SERVICE
| mvexpand TYPE
| table ID_SERVICE TYPE TIME

ID_SERVICE | TYPE | TIME
asd232 | mechanic_234 | 2023-12-01 08:45:00
asd232 | hydraulic_433 | 2023-12-01 08:45:00
afg567 | mechanic_234 | 2023-12-01 08:45:00
afg567 | hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Since the first row shares the same timestamp (TIME column), I would like to split each value into its own row, pairing the values positionally and copying the same timestamp to both. The desired output is:

ID_SERVICE | TYPE | TIME
asd232 | mechanic_234 | 2023-12-01 08:45:00
afg567 | hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Please, help me.