All Posts


From what I understand about Splunk, it works on the raw data and does not transform it; it marks and "segments" areas of the data in the tsidx file. Also, from what I understand about HF vs. UF, unlike the universal forwarder, the heavy forwarder does part of the indexing work itself. So what exactly does it index? Does it segment the raw data into the tsidx file and send both to the indexer?
Yup, that would work too. Sysmon - Configuration Files  Sysmon - Event Filtering 
Thanks a lot @gcusello ! I just created a search to build the CSV used in your query:

| ldapsearch domain=default search="(objectClass=computer)"
| table name
| rename name as host
| outputlookup append=false monitored_hosts.csv

and I run your query using monitored_hosts.csv. It works flawlessly! Thanks once again.
Why use blacklists at all? Sysmon has excellent built-in means of filtering what you do and don't want logged.
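For illustration, a minimal sketch of the Sysmon-side filtering being suggested (the binary path and schema version here are made up for the example; check them against your installed Sysmon and the Event Filtering docs linked above):

<Sysmon schemaversion="4.90">
  <EventFiltering>
    <!-- Exclude process-creation events for a known-noisy binary -->
    <ProcessCreate onmatch="exclude">
      <Image condition="is">C:\Program Files\Example\NoisyService.exe</Image>
    </ProcessCreate>
  </EventFiltering>
</Sysmon>

Loaded with sysmon -c sysmonconfig.xml, these events never reach the event log in the first place, so nothing needs blacklisting in inputs.conf.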
Can you post the output of this command? (Replace with your trial stack's name.)

openssl s_client -connect prd-p-xxxxx.splunkcloud.com:8088

I suspect the cert you'll see returned is from the Splunk internal CA, and that the Splunk Cloud trials are not set up with a signed cert on port 8088. On a production/paid Splunk Cloud stack you'd send logs to https://http-inputs-<stack_name>.splunkcloud.com on port 443, and I've never seen an issue with certificate validation in those environments (it uses the same cert as the web interface).
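As another way to reproduce the validation behaviour, a sketch using curl against the HEC endpoint (the stack name and token are placeholders):

curl https://prd-p-xxxxx.splunkcloud.com:8088/services/collector/event -H "Authorization: Splunk <hec_token>" -d '{"event": "test"}'

If this fails with a certificate verification error but succeeds when you add -k (which disables verification, so for testing only), the endpoint is presenting a cert your client doesn't trust.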
I'm experimenting with ETW logging of Microsoft IIS, where the IIS log ends up as XML in a Windows event log. But I have problems getting Splunk to use the correct timestamp field: Splunk uses the TimeCreated property for the event time (_time), and not the date and time properties that indicate when IIS actually served the web page. An example:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-IIS-Logging' Guid='{7e8ad27f-b271-4ea2-a783-a47bde29143b}'/><EventID>6200</EventID><Version>0</Version><Level>4</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2024-04-04T12:23:43.811459900Z'/><EventRecordID>11148</EventRecordID><Correlation/><Execution ProcessID='1892' ThreadID='3044'/><Channel>Microsoft-IIS-Logging/Logs</Channel><Computer>sw2iisxft</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='EnabledFieldsFlags'>2149961727</Data><Data Name='date'>2024-04-04</Data><Data Name='time'>12:23:37</Data><Data Name='cs-username'>ER\4dy</Data><Data Name='s-sitename'>W3SVC5</Data><Data Name='s-computername'>sw2if</Data><Data Name='s-ip'>192.168.32.86</Data><Data Name='cs-method'>GET</Data><Data Name='cs-uri-stem'>/</Data><Data Name='cs-uri-query'>blockid=2&amp;roleid=8&amp;logid=21</Data><Data Name='sc-status'>200</Data><Data Name='sc-win32-status'>0</Data><Data Name='sc-bytes'>39600</Data><Data Name='cs-bytes'>984</Data><Data Name='time-taken'>37</Data><Data Name='s-port'>443</Data><Data Name='csUser-Agent'>Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/123.0.0.0+Safari/537.36+Edg/123.0.0.0</Data><Data Name='csCookie'>-</Data><Data Name='csReferer'>https://tsidologg/?blockid=2&amp;roleid=8</Data><Data Name='cs-version'>-</Data><Data Name='cs-host'>-</Data><Data Name='sc-substatus'>0</Data><Data Name='CustomFields'>X-Forwarded-For - Content-Type - https on host tsidologg</Data></EventData></Event>

I've tried every combination in props.conf that I can think of. This should work, but doesn't:

TIME_PREFIX = <Data Name='date'>
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = <Data Name='date'>%Y-%m-%d</Data><Data Name='time'>%H:%M:%S</Data>
TZ = UTC

Any ideas?
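Not from the thread, but one workaround sometimes used when TIME_* settings don't take effect is to recompute _time with an index-time INGEST_EVAL transform. A minimal sketch, assuming the events arrive with renderXml=1 and the source shown in the sample (the stanza names here are made up, conf-file escaping may need adjusting, and this must live on the first heavy forwarder or indexer that parses the data):

props.conf:
[source::XmlWinEventLog:Microsoft-IIS-Logging/Logs]
TRANSFORMS-iis-etw-time = iis_etw_time

transforms.conf:
[iis_etw_time]
# Pull the IIS date/time Data elements out of the raw XML and re-parse them as UTC
INGEST_EVAL = _time=strptime(replace(_raw, "(?s)^.*<Data Name='date'>(\d{4}-\d{2}-\d{2})</Data><Data Name='time'>(\d{2}:\d{2}:\d{2})</Data>.*$", "\1T\2+0000"), "%Y-%m-%dT%H:%M:%S%z")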
Hi @bhaskar5428 , You need to change the regex capture group to cover only the time, like below:

| rex field=_raw "\"@timestamp\":\"\d{4}-\d{2}-\d{2}[T](?<Time>\d{2}:\d{2})"
I believe it was "https://jdk.java.net". Is there a certain package of OpenJDK or OracleJDK I should be using?
I have a situation with ingestion latency that I am trying to fix. The heavy forwarder is set to Central Standard Time in the front end under preferences. Does the front-end setting have any bearing on the props.conf settings?
Recent versions of the Splunk Universal Forwarder install with a least-privileged user that doesn't have the ability to pull Windows event logs by default. This is different from older versions of the UF, which could pull all logs by default. I'm not familiar with what you can do with Intune, but do you have the ability to specify command line arguments to run with the deployment? I'm thinking you may need to add something like WINEVENTLOG_SEC_ENABLE=1 (or similar). See https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller#Supported_command_line_flags for details along with other supported options.
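For illustration, a sketch of what the deployed install command could look like, using flags from that doc page (the MSI filename is a placeholder; verify the flags against your UF version):

msiexec.exe /i splunkforwarder-9.2.1-x64-release.msi AGREETOLICENSE=Yes WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_APP_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 /quiet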
I am trying to instrument a Xamarin Forms app with the following configuration:

var config = AppDynamics.Agent.AgentConfiguration.Create("<EUM_APP_KEY>");
config.CollectorURL = "http://<IP_Address>:7001";
config.CrashReportingEnabled = true;
config.ScreenshotsEnabled = true;
config.LoggingLevel = AppDynamics.Agent.LoggingLevel.All;
AppDynamics.Agent.Instrumentation.EnableAggregateExceptionHandling = true;
AppDynamics.Agent.Instrumentation.InitWithConfiguration(config);

In the documentation, it is stated that: "The Xamarin Agent does not support automatic instrumentation for network requests made with any library. You will need to manually instrument HTTP network requests regardless of what library is used." (Mobile RUM Supported Environments (appdynamics.com)) Yet, in the links Customize the Xamarin Instrumentation (appdynamics.com) and Instrument Xamarin Applications (appdynamics.com) I see that I can use the AppDynamics.Agent.AutoInstrument.Fody package to automatically instrument the network requests. I tried to follow the automatic-instrumentation approach using both the AppDynamics.Agent and AppDynamics.Agent.AutoInstrument.Fody packages as per the docs, but I get the error below when building the project:

MSBUILD : error : Fody: An unhandled exception occurred:
MSBUILD : error : Exception:
MSBUILD : error : Failed to execute weaver /Users/username/.nuget/packages/appdynamics.agent.autoinstrument.fody/2023.12.0/build/../weaver/AppDynamics.Agent.AutoInstrument.Fody.dll
MSBUILD : error : Type:
MSBUILD : error : System.Exception
MSBUILD : error : StackTrace:
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x0015a] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:222
MSBUILD : error : at InnerWeaver.Execute () [0x000fe] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:112
MSBUILD : error : Source:
MSBUILD : error : FodyIsolated
MSBUILD : error : TargetSite:
MSBUILD : error : Void ExecuteWeavers()
MSBUILD : error : Sequence contains more than one matching element
MSBUILD : error : Type:
MSBUILD : error : System.InvalidOperationException
MSBUILD : error : StackTrace:
MSBUILD : error : at System.Linq.Enumerable.Single[TSource] (System.Collections.Generic.IEnumerable`1[T] source, System.Func`2[T,TResult] predicate) [0x00045] in /Users/builder/jenkins/workspace/build-package-osx-mono/2020-02/external/bockbuild/builds/mono-x64/external/corefx/src/System.Linq/src/System/Linq/Single.cs:71
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences.LoadTypes () [0x000f1] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:117
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences..ctor (ModuleWeaver weaver) [0x0000d] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:90
MSBUILD : error : at ModuleWeaver.Execute () [0x00000] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ModuleWeaver.cs:17
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x000b7] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:204
MSBUILD : error : Source:
MSBUILD : error : System.Core
MSBUILD : error : TargetSite:
MSBUILD : error : Mono.Cecil.MethodDefinition Single[MethodDefinition](System.Collections.Generic.IEnumerable`1[Mono.Cecil.MethodDefinition], System.Func`2[Mono.Cecil.MethodDefinition,System.Boolean])

Any help please? Thank you
Perfect. Thank you!
Hi @corti77, if you have a list of monitored hosts you could run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you don't have this list, you can identify the hosts that sent logs in the last 30 days but not in the last hour:

| tstats count latest(_time) AS _time WHERE index=* BY host
| eval period=if(_time>now()-3600,"last_Hour","Previous")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="Previous"

Ciao. Giuseppe
Is renderXml = 1 set on the input for both the working and non-working cases? And any chance what you have working locally includes delimiters around the regex? For example:

blacklist1 = $XmlRegex="(C:\\Program Files\\Preton\\PretonSaver\\PretonService.exe)"

From inputs.conf - Splunk Documentation:
* key=regex format: [...]
* The regex consists of a leading delimiter, the regex expression, and a trailing delimiter. Examples: %regex%, *regex*, "regex" [...]
Hello @Nimi1 , The two points below might explain the duplicate events ingested in Splunk: 1 - If you have multiple forwarders and you've enabled the same input (like ELB access logs) on all of them, each forwarder independently sends the same logs, resulting in repeated entries in your Splunk data. 2 - If you've mistakenly enabled multiple inputs for the same source within Splunk (like ELB access logs), the same logs can be ingested multiple times, leading to duplicates. If this reply helps you, Karma would be appreciated.
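As a rough way to tell which of the two is happening, a sketch of a duplicate-detection search (the index and sourcetype are placeholders for wherever your ELB logs land):

index=aws sourcetype=aws:elb:accesslogs
| stats count dc(host) AS sending_hosts BY _raw
| where count > 1

If sending_hosts > 1 for the duplicated raw events, the same input is likely enabled on more than one forwarder; if the duplicates all share a single host, look for overlapping inputs on that one instance.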
The timeframe field is not terminated by a pipe, like the other fields, so the regex doesn't match.  Try this: | rex "(\w+)=([^\|\"]+)"
Hi, By chance, I discovered that a power user with admin rights had disabled the sysmon agent and Splunk forwarder on his computer to gain some extra CPU for his daily tasks. To react more quickly to this type of "issue" in the future, I would like to have some alerts in Splunk informing me whenever any host, initially domain-joined workstations and servers, might have stopped reporting events. Did someone implement something like this already? Any good article to follow? I plan to create a lookup table using ldapsearch and then an alert detecting which hosts from that table are not present in a basic host listing search on the sysmon index, for example. Any other better or simpler approach? Many thanks
OK, so the next time period after it was indexed would be 2024-04-04 01:00 to 2024-04-04 01:04:59, which doesn't include 04/04/2024 00:56:08.600, which is why your alert didn't get triggered.
Ok, thank you.   In one of the cases that I didn't get my alert triggered,  TimeIndexed = 2024-04-04 01:01:59 _time=04/04/2024 00:56:08.600
The simple answer is no - once the event is there, it is there until it expires, i.e. passes the retention period, or it is deleted (which is a complex and risky operation). A more complex answer is yes, but it involves the delete command, which has its challenges and is restricted by role, i.e. only those who have been assigned the role can delete events, and even then, they aren't actually deleted, they are just made invisible to searches. A pragmatic answer (as you hinted at) is that you deduplicate or merge the events with the same event_id through your searches, which, as you point out, has its own challenges.
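For completeness, a sketch of what the delete route looks like (the index and event_id here are placeholders; the search must be run by a user holding a role such as can_delete, and the events remain on disk, just unsearchable):

index=my_index event_id=12345
| delete

Make the search ahead of | delete as narrow as possible, since everything it returns becomes invisible to future searches.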