All Topics

Below is the Splunk query (My.Message has many various types of messages, but the one below is what I wanted):

index="myIndex" app_name="myappName" My.Message="*symbolName:*"

When I run the above query, I get the following results:

myappstatus got Created, symbolName: AAPL ElapsedTime: 0.0002009
myappstatus got Ended, symbolName: GOOGL ElapsedTime: 0.0005339
myappstatus got Created, symbolName: AAPL ElapsedTime: 0.0005339

Please help with the following:
1) How to get the total count of the query (visualization) only for My.Message="*symbolName:*"
2) How to split the string "myappstatus got Created, symbolName: AAPL ElapsedTime: 0.0002009"
3) How to create a table of "symbolName", "Total Count", "ElapsedTime" (for example, symbolName: AAPL, Total Count = 2 and ElapsedTime = 0.0007348 (0.0002009 + 0.0005339))
Thanks
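A hedged sketch of one way to cover all three parts at once, assuming the symbol and elapsed time always follow the literal labels shown (the rex pattern is illustrative and should be tested against your real messages; if the text lives in _raw rather than the My.Message field, run rex against _raw instead):

```
index="myIndex" app_name="myappName" My.Message="*symbolName:*"
| rex field=My.Message "symbolName:\s+(?<symbolName>\S+)\s+ElapsedTime:\s+(?<ElapsedTime>[\d.]+)"
| stats count AS "Total Count" sum(ElapsedTime) AS ElapsedTime BY symbolName
```

For just the overall total (question 1), `| stats count` after the base search is enough.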
Hi, I am trying to use a lookup to whitelist/exclude some values from search results, such as process_name. But whenever I run a search with just the lookup to verify the possible exclusion, every instance of a backslash shows as a double backslash. Like this:

Search to verify whitelist/exclusion:
| inputlookup whitelist | fields whitelisted_process_name | rename whitelisted_process_name as process_name | format

Search results: (process_name= C:\\...\\...\\...\\...\\...
What is actually in the lookup: process_name= C:\...\...\...\...\...

I have tried replacing the double backslash with a single one using the following commands and several other variations, but none of them seem to be working:
```| eval process_name=replace(process_name,"\\\\(.)","\")```
```| eval process_name=replace(process_name,"\\\\(.)","\1")```
```| eval process_name=replace(process_name,"(\\\\)","\")```
```| replace "*\\\\*" WITH "\" IN process_name```
Any help will be much appreciated. Thanks.
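For what it's worth, the doubled backslashes are often just how `| format` escapes the values for re-use inside a search string, not what the lookup actually contains. If a replacement really is needed, a commonly suggested form is below — hedged heavily, because backslashes in eval are escaped twice (once by the string parser, once by the regex engine), so four literal backslashes are needed to match a doubled one; test it on a few rows before trusting it:

```
| eval process_name=replace(process_name, "\\\\\\\\", "\\")
```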
First query:

index="raw_es2" app message="[Login][Password]Login simplified active." | stats count by message | rename count as "Calls" | table Calls

Second query:

index="raw_es2" app message="[Login][Password]Login simplified active." | table flowId | join type=inner flowId [ search index="raw_es2" app message="[Login][Password] finished." ] | stats count by message | rename count as "Success" | table Success

Third query:

index="raw_es2" app message="[Login][Password]Login simplified active." | table flowId | join type=inner flowId [ search index="raw_es2" app message="[Login][Password] finished with error." ] | stats count by message | rename count as "Errors" | table Errors

Is it possible to put the result of each query in an eval to calculate the percentage of errors and successes? Something like:

| stats sum(Calls), sum(Success), sum(Errors) | eval "Error Percentage" = round(Errors*100/Calls) | eval "Success Percentage" = round(Success*100/Calls)

The goal would be to have an answer like this, all at once:

Calls: 299, Success: 285, Errors: 14, PercentualErrors: 4.68, PercentualSuccess: 95.32
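A hedged sketch of one way to compute all five values in a single search, assuming every flow logs the "active" message and the matching "finished" / "finished with error" messages belong to the same flows (conditional counts inside stats replace the joins):

```
index="raw_es2" app (message="[Login][Password]Login simplified active." OR message="[Login][Password] finished." OR message="[Login][Password] finished with error.")
| stats count(eval(message="[Login][Password]Login simplified active.")) as Calls
        count(eval(message="[Login][Password] finished.")) as Success
        count(eval(message="[Login][Password] finished with error.")) as Errors
| eval PercentualErrors=round(Errors*100/Calls, 2), PercentualSuccess=round(Success*100/Calls, 2)
```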
Hi, this is the log:

{"time":"2023-06-13 20:35:02.046 +00:00", "level":"Information", "client":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.12.1.0Safari/537.36 Edg/xx.x.xx.x.x", "environment":"deduction", "user":"CORP\NBSWWUK", "clientIp":"xxx.xxx.xxx.xx", "processId":"24560", "processName":"w3wp", "machine":"mymachine", "version":"", "message":"", "log":"", "requestURL":"/request/v1/Application/getorganizations", "exception":"", "requestBody":"", "requestParam":"", "exceptionStack":""}
Hello, does anyone know if there is a Splunk app or add-on for Imperva DBF logs? We are currently receiving logs in JSON format, but they don't look very good. Or maybe we can use the Imperva plugin for WAF logs instead? Will it work? Has anyone tried? If so, can someone recommend some general configuration?
Hi, I'm kind of new to Splunk and I was wondering if someone could help with this: what I'm trying to do is a timechart that counts, by month, the number of hosts that had 3 or more lastlogons.

So far this is what I have:

index="assets" sourcetype="ldap:devices" | stats values(lastLogonTimestamp) as "LastLogon" by host | eval LastLogon_Count = mvcount(LastLogon)

host    LastLogon                                                      LastLogon_Count
Host1   2023-06-10T14:05:35.849017Z                                    1
Host2   2023-06-10T16:24:01.290211Z                                    1
Host3   2023-03-12T01:30:39.853238Z, 2023-03-22T12:01:18.877600Z,
        2023-04-01T14:05:33.812544Z, 2023-04-11T15:34:16.462356Z,
        2023-04-24T11:50:29.265116Z, 2023-05-04T12:34:50.229455Z,
        2023-05-14T16:16:22.161436Z, 2023-05-29T00:57:30.342080Z      8
Host4   2023-06-10T16:23:14.783142Z                                    1
Host5   2023-06-10T14:05:51.345719Z                                    1
Host6   2023-05-11T14:52:26.019471Z, 2023-05-21T21:22:27.404659Z,
        2023-05-31T22:02:28.210643Z, 2023-06-12T00:59:03.121092Z      4
Host7   2023-05-11T14:46:42.864582Z, 2023-05-21T18:02:34.820364Z,
        2023-05-31T22:13:17.107118Z, 2023-06-11T00:32:24.358015Z      4
Host8   2023-06-10T14:05:04.812651Z                                    1
Host9   2023-06-10T14:05:20.315748Z                                    1
Host10  2023-06-10T14:06:37.952136Z                                    1

From these results I want to count, on a timechart, the hosts that had 3 or more lastlogons in the LastLogon_Count field. So here the count should only be 3 (Host3, Host6, Host7). I tried doing this, but got no results:

index="assets" sourcetype="ldap:devices" | stats values(lastLogonTimestamp) as "LastLogon" by host | eval LastLogon_Count = mvcount(LastLogon) | timechart span=1mon count(eval(if(LastLogon_Count >= 3, 1,0))) by host
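One likely reason the attempt returns nothing is that stats discards _time, so the following timechart has no time field to bucket. A hedged sketch of an alternative that buckets the logon timestamps into months first and then counts hosts with 3 or more logons per month (the strptime format string is an assumption about the timestamp layout; adjust it and the counting semantics to your data):

```
index="assets" sourcetype="ldap:devices"
| eval _time=strptime(lastLogonTimestamp, "%Y-%m-%dT%H:%M:%S.%6NZ")
| bin _time span=1mon
| stats dc(lastLogonTimestamp) as LastLogon_Count by _time host
| where LastLogon_Count >= 3
| timechart span=1mon dc(host) as hosts_with_3plus_logons
```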
We have to update the certificates for secure communication between the UF, HF and indexer. The way to prepare a combined cert is a bit confusing. I have updated the outputs.conf file for the forwarder with a combined cert in sslCertPath and the root certificate in sslRootCAPath:

[tcpout-server://xxxx]
sslCertPath = xxx/xxx/combined_cert.pem
sslRootCAPath = xxx/xxx/rootcert.pem
sslPassword = xxxx

Is this the right way to do it? The combined cert file consists of the server cert, encrypted key and root cert.
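For what it's worth, a combined certificate is typically just the PEM blocks concatenated in order: server certificate first, then its private key, then the CA certificate. A sketch of the expected layout of combined_cert.pem (block labels vary with how the key was generated):

```
-----BEGIN CERTIFICATE-----
... server certificate ...
-----END CERTIFICATE-----
-----BEGIN ENCRYPTED PRIVATE KEY-----
... server private key ...
-----END ENCRYPTED PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... root / CA certificate ...
-----END CERTIFICATE-----
```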
Hello Splunk Community, I am having some difficulty getting Windows event log filters to work properly. Whatever I have specified in the inputs.conf of Splunk_TA_windows is being ignored; I can tell because there are significant volumes of events present that are not in the whitelist stanzas. I have even tried blocklisting very large numbers of these unwanted event codes explicitly (in blacklist1) without success. I can see the app successfully deploy to my clients in the internal logs when I push changes to the server class or add-on, and the clients that I have verified have these exact stanza settings are still sending event logs that are not on the whitelist or are explicitly blocklisted. I am using 8.6.0 of the Windows add-on and UFs on 8.x and 9.x versions (hundreds). To make matters worse, ingest actions are also not working properly; I have tried adding what is effectively the same regex as the ingest action (blacklist1 on Splunk Cloud), but am still seeing these unwanted codes. Is anyone else having this problem, or am I doing something wrong? Also, I have the format as XML (as I understand it, XML logs are typically smaller in size than their non-XML counterparts). Does this mean that in the advanced format filters I need to use "$XmlRegex=" or can I still use the EventCode="..." regex?

[WinEventLog://Security]
renderXml = true
whitelist1=EventCode="(4740|644|104|1100|4624|528|4625|529|4776|680|681|4720|624|4732|636|4728|632|4756|660|4771|675|4730|634|4734|638|4758|662|1102|517|4722|626|4726|630|4768|672|676|4769|673|4765|4766|5145)"
blacklist1=$XmlRegex="EventID>(53506|53504|40970|40962|40961|36928|... (this blocklist goes on and on)
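For reference, a hedged inputs.conf sketch of the advanced-filter form commonly used when renderXml = true, where the filter regex is matched against the raw XML event text via the $XmlRegex key (the event codes and regex here are illustrative, not a recommendation; verify against the Windows add-on documentation for your versions):

```
[WinEventLog://Security]
renderXml = true
# With renderXml = true, match against the XML text, including the tags:
whitelist1 = $XmlRegex="<EventID>(4624|4625|4688)</EventID>"
```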
Trying to install Splunk on an Ubuntu instance within e3. I've partitioned and formatted the drive, and every step works fine, but whenever I get to the last step I keep running into this issue. How can I solve it?
I am developing a Splunk SOAR app that retrieves JSON from our backend and ingests it into a container in Splunk SOAR. However, I need to show some fields that are not included in the container schema, and I want those custom fields to be deployed with my app. Hence my question: is it possible to add custom fields to a Splunk Phantom container schema programmatically, so our customers do not need to create them manually in the Splunk SOAR user interface?
Hi, currently I have scheduled alerts that are triggered based on file count results. If the count of 'file x' for that day is 0, I send out an email alert for the missing file. However, I am getting inaccurate alert messages whenever Splunk is not ingesting the logs from the target server: instead of 'no logs ingested', it shows as if the file were missing. I currently have 2 separate saved searches. One checks whether the index is receiving logs, and the other does the file monitoring. How do I integrate these two so that the second one will not run when the first one comes back with 0 events? Thanks!
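A hedged sketch of one way to fold both checks into a single alert search, so the "file missing" condition is only reached when the index is actually receiving logs (the index name and the file-x source pattern are placeholders for your own):

```
index=my_index
| stats count as total_events count(eval(like(source, "%file_x%"))) as file_x_events
| eval status=case(total_events=0, "no logs ingested",
                   file_x_events=0, "file x missing",
                   true(), "ok")
| where status!="ok"
```

The alert then triggers on any result, and the status field tells you which condition fired.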
Hey all - thanks in advance! I have _raw log data that contains a header section and then what appears to be two entries within itself. I want to split these entries (they are formatted the same, except the latter appends a '1' onto each fieldname) and then create two events from this one event, like so:

Before:
_raw = HEADER|PART1|PART2

After:
event1 = HEADER|PART1
event2 = HEADER|PART2

An event will come from the same IP and device name; the parts are paths and simple fields. Here is a sample log (bracketed to show how I want it split, but these brackets are not in the raw data):

[Jun 12 23:00:09 server.i.j WRD: 0|AName|Named Application Server|1|0|Rule|0|ClientTime=7:02:28-PM CompName=sparse.info.given.here IPv4=x.x.x.x] [Path=No-Results-Found MD5= Size= Modified= RuleID= ValidHits= InvalidHits= NoValidationHits=] [Path1=No-Results-Found MD51= Size1= Modified1= RuleID1= ValidHits1= InvalidHits1= NoValidationHits1= Count=1]

I would like the final results to be:

Jun 12 23:00:09 server.i.j WRD: 0|AName|Named Application Server|1|0|Rule|0|ClientTime=7:02:28-PM CompName=sparse.info.given.here IPv4=x.x.x.x Path=No-Results-Found MD5= Size= Modified= RuleID= ValidHits= InvalidHits= NoValidationHits=
Jun 12 23:00:09 server.i.j WRD: 0|AName|Named Application Server|1|0|Rule|0|ClientTime=7:02:28-PM CompName=sparse.info.given.here IPv4=x.x.x.x Path1=No-Results-Found MD51= Size1= Modified1= RuleID1= ValidHits1= InvalidHits1= NoValidationHits1= Count=1

Count is not really a big deal here; it can be on either log (the latter by default, as it is the final field in the log). I have the regex to perform the part-splitting, if rex is the move here:

| rex field=_raw "(?<header>.*IPv4Address=\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3}) (?<part1>Path.*) (?<part2>Path.*)"

Once recombined, I will still perform manipulation on the resulting logs, and I do not need to write to file or CSV.
The issue this is causing relates to finding accurate hits on files (the ValidHits1 field is annoying; same with Path1). I can happily rename fields after rejoining my parts to the header so I can then correlate on top of all data with common field names. Please feel free to ask for more information to help me out with this, and I appreciate any help you can give for this project!
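A hedged sketch of the classic split-one-event-into-two pattern, building on the rex idea in the post (note the sample log says IPv4= while the posted regex says IPv4Address=, so the pattern below is an assumption to be adjusted to the real data):

```
| rex field=_raw "(?<header>.*IPv4=\S+)\s(?<part1>Path=.*?)\s(?<part2>Path1=.*)"
| eval parts=mvappend(part1, part2)
| mvexpand parts
| eval _raw=header." ".parts
| fields - header part1 part2 parts
```

mvexpand turns the two-valued parts field into two result rows, each of which is then rejoined with the shared header.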
Hi Splunkers, we have an environment where, for Windows log collection, it is highly desirable to avoid agent installation. After some searching, we found that the proper way is to use WMI. Now, what is not completely clear to us is the list of permissions required for a domain user; what we need is a short but exhaustive list of permissions for it. Until now, we know that this user must:

- Be a domain user
- Belong to the Event Log Readers group
- Have access to the systems from which to collect logs
- Be able to read at minimum the System, Application and Security logs in the case of AD DCs
- Be the user that executes the splunkd daemon for the forwarder (HF or UF, in our case HF) that must collect data with WMI

Are there any other requirements we are missing?
How do I perform a lookup on multiple fields but append the missing values? Thanks. For example:

Table A:
Name      Role      IP
PaloAlto  Firewall  192.168.1.1
Cisco     Router    192.168.1.2

Table B:
Name      Role      ConsoleIP
PaloAlto  Firewall  192.168.10.1
Cisco     Router    192.168.10.2
Siemens   Switch    192.168.10.3

Table A lookup Table B, expected results:
Name      Role      IP           ConsoleIP
PaloAlto  Firewall  192.168.1.1  192.168.10.1
Cisco     Router    192.168.1.2  192.168.10.2
Siemens   Switch    <empty>      192.168.10.3
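A hedged sketch of one way to get this outer-join-style result, assuming both tables exist as lookup files (tableA.csv and tableB.csv are hypothetical names) and that Table B, as in the example, contains every Name/Role pair:

```
| inputlookup tableB.csv
| lookup tableA.csv Name Role OUTPUT IP
| table Name Role IP ConsoleIP
```

Starting from the superset table and looking up the other one leaves IP empty for rows (like Siemens) that have no match in Table A, which matches the expected output.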
I have an index populated with JSON data.  One of the elements in the JSON data is an array of key/value pairs.  I want to create a dashboard that pulls all of the data in one search populating one table with the non-array values.  When one of the rows in the table of non-array values is selected, I want to populate a second table with the array of key/value pairs parsed out into individual rows.  I have this sort of working by searching again back to the original index but would like to avoid this overhead since the key/value pairs were included in the initial results. Any ideas on how to pull this off? Example of one record: { "Header": { "EntryTimestamp": "2023-06-13T14:13:07.9316608Z", "EntryType": "Exception", "SubmittingUser": "TEST", "Message": "Call to Server end point failed. Unable to connect to the remote server.", "ApplicationName": "TestService", "Environment": "DEVELOPMENT" }, "Items": [{ "Key": "ExceptionTimeStamp", "Value": "2023-06-13T14:13:07.9266" }, { "Key": "ExceptionSource\\MachineName", "Value": "TESTSERVER" }, { "Key": "ExceptionSource\\ClassName", "Value": "Socket" }, { "Key": "ExceptionSource\\NameSpace", "Value": "System.Net.Sockets" }, { "Key": "ExceptionSource\\Method", "Value": "DoConnect" }, { "Key": "RootExceptionType", "Value": "System.Net.Sockets.SocketException" }, { "Key": "ExceptionID", "Value": "72c52379-433b-41a7-90aa-1900cfcc4c29" }, { "Key": "Timeout", "Value": "115000" }, { "Key": "Message", "Value": "Unable to connect to the remote server" } ], "Data": "" } The elements in the "Header" section would appear in one table.  When that row is selected, the array of "Items" would be populated in a second two-column table with the first column being Keys and the second column being the corresponding value that goes with the key. 
The approach I started with was putting the Items array in a hidden column in the first table, but am stuck on that approach with trying to figure out how to get that into an array and populate the second table.
Hi, I have a search query:

| search Region = EMEA
| eval Status=case(Statistic=0,"Green", Statistic=2,"Red", Statistic=1,"Blue", 1==1, " ")
| appendpipe [ stats count | eval Status="Black" | where count=0 | fields - count]
| stats latest(Status)

The region has 7 SOD status entries, i.e. red and green. The issue is that if one SOD is in the red state, it still shows green status. What I require is that even a single red status has to be picked over the green ones, as I am using this in a dashboard.
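A hedged sketch of one way to pick the worst status rather than the latest one — map each status to a numeric severity, take the maximum, and map back (the Red > Blue > Green ordering is an assumption; "Black" stands in for the no-results case, as in the original appendpipe):

```
| search Region=EMEA
| eval Status=case(Statistic=0,"Green", Statistic=2,"Red", Statistic=1,"Blue", 1==1," ")
| eval sev=case(Status="Red",3, Status="Blue",2, Status="Green",1, 1==1,0)
| stats max(sev) as sev
| eval Status=case(sev=3,"Red", sev=2,"Blue", sev=1,"Green", 1==1,"Black")
```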
I have used a search query like this:

| savedsearch REPORT1 | chart values(COLUMN3) AS Status BY COLUMN2 PROCESS_ID | fillnull value="_" | table COLUMN2 VAL1 VAL2 VAL3 VAL4 VAL5 VAL6 VAL7 ...

and I got a result where values are repeated within cells (i.e. a few cells have multiple values, say '_' and 'F') and a few columns are null. Is there any way to display a single accurate value within each cell of the table?
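A hedged sketch of one common fix: values() keeps every distinct COLUMN3 value per cell, so a single-valued aggregator such as latest() (or max(), depending on which value counts as "accurate" for you) yields one value per cell:

```
| savedsearch REPORT1
| chart latest(COLUMN3) AS Status BY COLUMN2 PROCESS_ID
| fillnull value="_"
```

If the multivalue cells must be kept and only thinned, mvdedup or mvfilter on the affected columns are the usual alternatives.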
Hello community, I am having an issue creating an appropriate SEDCMD to reduce the size of specific Windows events. I am trying to keep only one random bit (could be anything) and throw away all the rest before they get indexed. Below is the raw event I wanted to strip down (it is large); I just want a single word/line. I did try the following, but it did nothing. Under Splunk_TA_windows local/props.conf I put something like:

[source::XmlWinEventLog:Security]
SEDCMD-4688_splunkd_events_clearing=s/.\\Program Files\\.+\\splunkd\.exe//g

----------------------------------------------- Raw Event ---------------------------------------------------
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid=' XXXXXXXX -4994-a5ba-3e3b0328c30d}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2023-06-13T10:39:41.797279900Z'/><EventRecordID>12536409</EventRecordID><Correlation/><Execution ProcessID='4' ThreadID='15216'/><Channel>Security</Channel><Computer> XXXXXXXX </Computer><Security/></System><EventData><Data Name='SubjectUserSid'>S-1-5-18</Data><Data Name='SubjectUserName'>XXXXXXXX</Data><Data Name='SubjectDomainName'> XXXXXXXX </Data><Data Name='SubjectLogonId'>0 XXXXXXXX 7</Data><Data Name='NewProcessId'>0x2734</Data><Data Name='NewProcessName'>C:\Program Files\SplunkUniversalForwarder\bin\splunk-MonitorNoHandle.exe</Data><Data Name='TokenElevationType'>%%1936</Data><Data Name='ProcessId'>0x17d4</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>S-1-0-0</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>0x0</Data><Data Name='ParentProcessName'>C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe</Data><Data Name='MandatoryLabel'>XXXXXXXX -16384</Data></EventData></Event>

(The event is followed by the full automatically extracted field listing — EventCode = 4688, NewProcessName = C:\Program Files\SplunkUniversalForwarder\bin\splunk-MonitorNoHandle.exe, ParentProcessName = C:\Program Files\SplunkUniversalForwarder\bin\splunkd.exe, signature = A new process has been created, sourcetype = XmlWinEventLog, and many more.)

Any help is much appreciated. Thank you All!
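For reference, a hedged props.conf sketch of a SEDCMD that collapses the whole EventData block of such events to a single token (the regex is illustrative and must be tested against your exact XML). One commonly tripped-over point: props-based SEDCMD takes effect on the first instance that parses the data (heavy forwarder or indexer), not on a universal forwarder, so placing it only in the UF's copy of the add-on can do nothing:

```
[source::XmlWinEventLog:Security]
# Illustrative only: replace everything between the EventData tags
SEDCMD-strip_4688 = s/<EventData>.*<\/EventData>/<EventData>stripped<\/EventData>/g
```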
Hi, I am completely new to Splunk and I'm forwarding directly from FortiAnalyzer to Splunk on TCP 1514. I have configured the FortiAnalyzer Remote Server Type = 'Syslog Pack' (the other options are 'Syslog', 'FortiAnalyzer' and 'Common Event Format'), but using the other options I didn't appear to see data arriving in Splunk. I do see data arriving now, but other than the timestamp and the source IP of the FortiAnalyzer, the data is not legible and looks like hex. It appears that there is something wrong with how Splunk is interpreting the format of the data arriving from the FortiAnalyzer. I'd be extremely grateful if someone could advise whether there is a particular source type I should be using on the TCP data input for logs arriving from the FortiAnalyzer. Source type 'syslog' seemed the most obvious to me, but that didn't resolve the issue; I have tried a few others and haven't had any success so far. Is anyone able to advise on anything I may be doing incorrectly? I appreciate that it is not best practice to forward directly, but I was hoping this was possible, as we are trying to initially test Splunk with a view to potentially scaling up going forward. Many thanks in advance.
Hi, I have these props and transforms files to filter out events before they reach the index queue. I'm not sure why, but the filter doesn't seem to work any more. When I use btool to check the configuration, I still see the configuration in place. We upgraded Splunk from 8.1.0 to 9.0.4 a month ago and checked whether the upgrade had some effect on the configuration, but it doesn't. What could be the reason?
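For reference, a hedged sketch of the standard nullQueue routing pattern (sourcetype and regex are placeholders). When a pair like this stops working, the usual suspects are the configuration living on the wrong tier — it must be on the first full Splunk instance that parses the data, i.e. the heavy forwarder or the indexer — or a regex that no longer matches the events after a source change:

```
# props.conf (on the parsing tier: HF or indexer)
[my_sourcetype]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf
[drop_noise]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue
```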