All Posts
NetFlow is for flow reporting. You need Splunk Stream: https://docs.splunk.com/Documentation/StreamApp/latest/DeployStreamApp/AboutSplunkStream
I am trying to send Cisco SD-WAN router logs to Splunk Cloud. I have installed the Universal Forwarder on the log server running syslog-ng and am able to forward text-based logs. However, the firewall logs are output via HSL, which is NetFlow v9 format. How can I get this type of data into Splunk Cloud?
Hi, Is it possible for someone to aid me in reformatting the given events to align with the structure present in blacklist3, organizing them into their respective blacklists or potentially amalgamating them into a unified blacklist?

blacklist3 = $XmlRegex="<EventID>4688<\/EventID>.*<Data Name=('NewProcessName'|'ParentProcessName')>[C-F]:\\Program Files\\Splunk(?:UniversalForwarder)?\\bin\\(?:btool|splunkd|splunk|splunk-(?:MonitorNoHandle|admon|netmon|perfmon|powershell|regmon|winevtlog|winhostinfo|winprintmon|wmi))\.exe"

Tanium events:
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Tools\\StdUtils\\TaniumExecWrapper\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Patch\\tools\\TaniumExecWrapper\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\Patch\\tools\\TaniumFileInfo\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe|
C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\python38\\TPython\.exe|
C:\Program Files (x86)\Tanium\Tanium Client\Tools\Patch\7za.exe

Windows Defender:
C:\Program Files\Windows Defender Advanced Threat Protection\MsSense.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseIR.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseCM.exe
C:\ProgramData\Microsoft\Windows Defender\Platform\.*\MpCmdRun.exe
C:\ProgramData\Microsoft\Windows Defender\Platform\.*\MsMpEng.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\DataCollection\.*\OpenHandleCollector.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseNdr.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\SenseCM.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\SenseIR.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\.*\MsSense.exe
C:\Program Files\Windows Defender\MpCmdRun.exe
C:\Program Files\Windows Defender\MsMpEng.exe
C:\Program Files\Windows Defender Advanced Threat Protection\SenseTVM.exe
C:\ProgramData\Microsoft\Windows Defender Advanced Threat Protection\Platform\10.8560.25364.1036\SenseTVM.exe

Rapid7 (ParentProcessName):
C:\Program Files\Rapid7\Insight Agent\components\insight_agent\3.2.4.63\ir_agent.exe
C:\Program Files\Rapid7\Insight Agent\components\insight_agent\4.0.0.1\ir_agent.exe
C:\Program Files\Rapid7\Insight Agent\ir_agent.exe
C:\\Program Files\\Rapid7\\Insight Agent\\components\\insight_agent\\.*\\get_proxy\.exe|

Azure:
C:\Program Files\AzureConnectedMachineAgent\ExtensionService\GC\gc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_arc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_service.exe
C:\Program Files\AzureConnectedMachineAgent\GCArcService\GC\gc_worker.exe
C:\Program Files\AzureConnectedMachineAgent\azcmagent.exe

Gytpol:
C:\\Program Files\\WindowsPowerShell\\Modules\\gytpol\\Client\\fw.*\\GytpolClientFW.*\.exe|

Forescout (ParentProcessName):
C:\Program Files\ForeScout SecureConnector\SecureConnector.exe

Thanks.
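One way to amalgamate a list of literal paths like these into a single unified pattern is to escape each path and join them with alternation. A minimal sketch in Python (not Splunk's XmlRegex blacklist syntax), using a hypothetical three-path subset of the list above:

```python
import re

# Hypothetical subset of the parent-process paths from the question
paths = [
    r"C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe",
    r"C:\Program Files\Windows Defender\MpCmdRun.exe",
    r"C:\Program Files\Rapid7\Insight Agent\ir_agent.exe",
]

# re.escape handles the backslashes, parentheses, and dots for us
unified = "|".join(re.escape(p) for p in paths)
pattern = re.compile(f"<Data Name='ParentProcessName'>(?:{unified})</Data>")

event = "<Data Name='ParentProcessName'>C:\\Program Files\\Windows Defender\\MpCmdRun.exe</Data>"
print(bool(pattern.search(event)))  # True
```

The same escape-and-alternate idea carries over to the blacklist3 style shown above; only the surrounding `<EventID>`/`<Data Name=...>` anchors would need to be added.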
Hi @aditsss , please try this regex: | rex ".*\]\s*(?<msg>[^:]+)" that you can test at https://regex101.com/r/7yLHPr/1 Ciao. Giuseppe 
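As a quick sanity check outside Splunk, the same capture can be reproduced with Python's re module (Python spells the named group `(?P<msg>...)`); the sample line is taken from the question further down the thread:

```python
import re

line = ("][ERROR][pub-#32738][AssociationRemoteProcessor] "
        "Exception while running association: javax.cache.CacheException")

# Greedy .*\] skips to the last "]", then [^:]+ stops at the first colon after it
m = re.search(r".*\]\s*(?P<msg>[^:]+)", line)
print(m.group("msg"))  # Exception while running association
```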
Hi @DaveBunn, you have the same problem taking events from a data source: if you don't have events, you don't have any count in the stats. First you have to define what you want to monitor: hosts, indexes, sourcetypes. Then, having defined what to monitor (e.g. sourcetypes), you have to create another lookup (called e.g. perimeter.csv) containing all the values of the field to monitor in at least one column (e.g. sourcetype). Then you could run something like this:

| inputlookup TA_feeds.csv
| stats count BY sourcetype
| append [ | inputlookup perimeter.csv | eval count=0 | fields sourcetype count ]
| stats sum(count) AS total BY sourcetype
| eval status=if(total=0,"Not Present","Present")
| table sourcetype status

This way, you have a table containing all the sourcetypes to monitor and, for each one, whether any rows are present. Ciao. Giuseppe
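The logic of the append trick, in miniature: the real counts are unioned with a zero row for every monitored value, then summed, so values with no events still surface. A rough Python analogue of the same idea (hypothetical sourcetype values, not SPL):

```python
from collections import Counter

# Hypothetical sourcetypes actually observed in the data
observed = ["Azure:Signin", "Azure:Signin", "mscs:Azure:VirtualMachines"]

# perimeter.csv: everything we *expect* to see
perimeter = ["Azure:Signin", "mscs:Azure:VirtualMachines", "WinEventLog"]

counts = Counter(observed)            # real counts (the first stats)
for st in perimeter:                  # the "append count=0" rows
    counts[st] += 0                   # ensures the key exists, with at least 0

# the final eval/table: flag anything whose total stayed at zero
status = {st: ("Present" if counts[st] > 0 else "Not Present") for st in perimeter}
print(status)
```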
Hi @QuantumRgw, so you have to install a UF on your PCs and manage them using a Deployment Server, as described at https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Introduction_to_the_Splunk_Distributed_Deployment_Server_(SDDS)#:~:text=The%20Splunk%20deployment%20server%20is,Enterprise%20and%20Universal%20Forwarder%20instances. Then you have to define your perimeter, in terms of hosts to monitor and, for each host, which logs to index. Then, having the above information, you have to define your Use Cases, e.g. monitoring of administrator accesses, presence of outdated patches, presence of known malicious packets, etc. Ciao. Giuseppe
Hi, Could anyone please help me to convert this blacklist to an XML regex? blacklist1 = EventCode="4662" Message="Object Type:(?!\s*(groupPolicyContainer|computer|user))" Thanks.
I have a lookup file called TA_feeds.csv with six columns, labeled below, and multiple rows similar to these:

index | sourcetype | source | period | App | Input
Azure | mscs:Azure:VirtualMachines | /subscription/1111-2222-3333-4444/* | 42300 | SPlunk_Cloud | AZ_VM_Feeds
AD | Azure:Signin | main_tenant | 360 | Azure_App | AD_SignIn

I use the SPL

[| inputlookup TA_feeds.csv | eval earliest=0-period."s" | fields index sourcetype source earliest | format]
| stats count by index sourcetype source

which iterates through the lookup, searches the relevant indexes for the data one row at a time, and generates a count for each input type. The problem is that if a row in the lookup does not generate any data, there is no entry in the stats. What I need is to be able to show when a feed is zero, i.e. | search count=0, but I can't figure out how to generate the zero entries.
Using the mvexpand command twice breaks any association between the values. Instead, combine the fields, use mvexpand, then break them apart again:

| eval logMsgTimestampInit = logMsgTimestamp
| eval ID_SERVICE= mvappend(ID_SERVICE_1,ID_SERVICE_2) , TYPE= mvappend(TYPE1,TYPE2)
| eval pair = mvzip(ID_SERVICE, TYPE)
| mvexpand pair
| eval ID_SERVICE = mvindex(pair,0), TYPE = mvindex(pair, 1)
| table ID_SERVICE TYPE
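The difference is easy to see outside SPL: expanding the two multivalue fields independently is a Cartesian product, while mvzip-then-mvexpand behaves like zip and keeps each ID with its own type. A small Python illustration, using the values from the question's first row:

```python
from itertools import product

ids = ["asd232", "afg567"]
types = ["mechanic_234", "hydraulic_433"]

# Two separate mvexpands: every ID paired with every TYPE (4 rows, 2 of them spurious)
print(list(product(ids, types)))

# mvzip + a single mvexpand: positional pairing (2 rows, association preserved)
print(list(zip(ids, types)))
```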
Thanks. It does, though I'm going to have to take a closer look at the reports. They at least appear to have the same owner (nobody) and run as Owner. Also, the Read/Write permissions are the same.
The user context is the name of the account under which the job runs. In most cases, it's the name of the user running the search, but some scheduled searches can be set to run as the owner. In the specific case of user context = splunk-system-user, that is the name used when a search has no owner (i.e. it is owned by "nobody").
Hi @aditsss Please check this:

| makeresults
| eval _raw="[AssociationRemoteProcessor] Exception while running association: javax"
| rex field=_raw "\]\s(?<rexField>.*)\:"
| table _raw rexField

This rex produces the following output:

_raw | rexField
[AssociationRemoteProcessor] Exception while running association: javax | Exception while running association
It would help to know what you've tried so far, but perhaps this will help. | rex "] (?<field>.*?):"
Hi isoutamo, sorry for the dumb question, but do I have to put only the MN in maintenance mode, or also the other nodes (except the SH)? And do I have to stop Splunk manually, or is it stopped automatically during the OS shutdown? Thank you, Andrea
Hi Dears, I have the following issue: some cells contain two values per cell, like the first row of the ID_SERVICE column below, because the payload contains two service IDs in the same message.

| eval logMsgTimestampInit = logMsgTimestamp
| eval ID_SERVICE= mvappend(ID_SERVICE_1,ID_SERVICE_2) , TYPE= mvappend(TYPE1,TYPE2)
| table ID_SERVICE TYPE

ID_SERVICE | TYPE | TIME
asd232, afg567 | mechanic_234, hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

What I need: I need to split these cells every time this occurs. I tried to use mvexpand, but unfortunately it makes a mess of the table. When I use mvexpand it duplicates the rows, and for each value in the first column it creates another row:

... | query_search
| mvexpand ID_SERVICE
| mvexpand TYPE
| table ID_SERVICE TYPE TIME

ID_SERVICE | TYPE | TIME
asd232 | mechanic_234 | 2023-12-01 08:45:00
asd232 | hydraulic_433 | 2023-12-01 08:45:00
afg567 | mechanic_234 | 2023-12-01 08:45:00
afg567 | hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Since the first row shares the same timestamp (TIME column), I would like to split each value into its own row and copy the same timestamp to both. The desired output is as follows:

ID_SERVICE | TYPE | TIME
asd232 | mechanic_234 | 2023-12-01 08:45:00
afg567 | hydraulic_433 | 2023-12-01 08:45:00
cvf455 | hydraulic_787 | 2023-12-01 08:41:00
bjf347 | mechanic_343 | 2023-12-01 08:40:00

Please help me.
][ERROR][pub-#32738][AssociationRemoteProcessor] Exception while running association: javax.cache.CacheException: class org.apache.ignite.IgniteInterrup
[2023-11-09T06:06:02,015][ERROR][pub-#19230][FedPledgingFlaggingRemoteProcessor] No rejection criteria found for the specified key: CO.

Hi, can anyone guide me on how to extract the highlighted text (the message between the last "]" and the following ":")?
I have a few scheduled jobs running from a TA. Multiple ones have | collect index=summary at the end of the SPL. For some of them, when they run I get 0 results with a warning "no results to summary index". I reran the job manually and can see the results. I can see there's a macro error in the job that did not have any results, but another job with very similar SPL runs fine. When I looked at search.log, the one thing that stood out for the job that ran with results was this line: user context: Splunk-system-user. The job that did not return results did not have "user context: Splunk-system-user". My question is: what sets the user context, and what overrides it (if possible)? I want to see if this is the cause of my problems. Thanks
Hi, usually I did it one layer at a time: SH, IDX, etc. On the SH layer there is usually no need to put nodes into detention first and then reboot, but do it as your Splunk usage requires. Also, if the indexers restart quickly (a couple of minutes at most), just extend (if needed) the timeouts for detecting node downtime, to avoid unnecessary switching of primaries to another node. Of course, it's good to put the MN into maintenance mode before you restart each node, one by one, when it needs a reboot. Usually I keep Splunk up and running until it's time for the reboot. After all the OSs have been updated and restarted, disable the MN's maintenance mode and wait for the needed repair actions to complete. r. Ismo
Hi @Justin.Pienaar, Thanks for following up and sharing the solution! I love to see it! 
From the Installation Manual Upgrading Splunk Enterprise directly to version 9.0 is only supported from versions 8.1.x and higher. Upgrading a universal forwarder directly to version 9.0 is supporte... See more...
From the Installation Manual Upgrading Splunk Enterprise directly to version 9.0 is only supported from versions 8.1.x and higher. Upgrading a universal forwarder directly to version 9.0 is supported from versions 8.1.x and higher. See https://docs.splunk.com/Documentation/Splunk/9.0.1/Installation/AboutupgradingREADTHISFIRST