All Posts


You did remove the quotes in the second transform you posted. The problem with your first regex is that it matches both the event you want to remove and the one you want to keep. This may work:

NewProcessName.*?Teams\.exe<\/Data>.*?ParentProcessName

It looks for Teams.exe after NewProcessName and before ParentProcessName. Always test your regex, like this: https://regex101.com/r/v97Z1h/1

Edit: This may be faster, since it uses fewer steps to find the data:

NewProcessName[^<]+Teams\.exe<

Edit 2: You can also set a sourcetype for the data you are trying to delete. This way nothing is removed before you can see that everything is OK. If sourcetype=ToDelete shows the correct data, then you can send it to nullQueue:

[4688cleanup]
REGEX = NewProcessName[^<]+Teams\.exe<
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ToDelete
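A minimal sketch of how that transform might be wired into props.conf; the sourcetype name XmlWinEventLog:Security is an assumption (use whatever sourcetype your 4688 events actually arrive under):

[XmlWinEventLog:Security]
TRANSFORMS-teams4688 = 4688cleanup

Then, once the ToDelete test looks right, the transform can be switched to drop the events instead of retagging them:

[4688cleanup]
REGEX = NewProcessName[^<]+Teams\.exe<
DEST_KEY = queue
FORMAT = nullQueue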
Hi all,

We are trying to install Splunk through a Chef script, but the installation gets stuck and times out after 20 minutes. The command we ran is given below:

/opt/splunkforwarder/bin/splunk enable boot-start --accept-license --no-prompt --answer-yes

When the Splunk installation script runs on the instance, it always hangs the first time, as shown in the first screenshot. It then works if the command is run again, as shown in the second screenshot. Note: after the first run, the CPU went to 100% and the Splunk process then exited.

First Run:

Second Run:
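A sketch of one way to split that bootstrap into explicit steps, so license acceptance and first-time setup happen at an initial start rather than inside enable boot-start; the paths and the service account name are assumptions, and this may or may not avoid the hang in your environment:

/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk enable boot-start -user splunk --no-prompt --answer-yes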
In the current project, we are sending application logs to Splunk, while the splunk-otel-collector is responsible for sending instrumentation logs to SignalFx. The issue arises because we utilize the cloudFrontID as a correlation ID to filter logs in Splunk, whereas SignalFx employs the traceId for log tracing. I am currently facing challenges in correlating the application logs' correlation ID with SignalFx's traceId. I attempted to address this issue by using the "Serilog.Enrichers.Span" NuGet package to log the TraceId and SpanId. However, no values were logged in Splunk. How can I access the TraceId generated by the OpenTelemetry Collector within the ASP.NET web application (Framework version: 4.7.2)? Let me know if further details are required from my end.
Hello all,

Logs are not being indexed into Splunk. My configurations are below.

inputs.conf:
[monitor:///usr/logs/Client*.log*]
index = admin
crcSalt = <SOURCE>
disabled = false
recursive = false

props.conf:
[source::(...(usr/logs/Client*.log*))]
sourcetype = auth_log

My log file naming pattern:
Client_11.186.145.54:1_q1234567.log
Client_11.186.145.54:1_q1234567.log.~~
Client_12.187.146.53:2_s1234567.log
Client_12.187.146.53:2_s1234567.log.~~
Client_1.1.1.1:2_p1244567.log
Client_1.1.1.1:2_p1244567.log.~~

Some of the log files start with the line below, followed by the log events:
===== JLSLog: Maximum log file size is 5000000

For this I tried the following configs one by one, but nothing worked:
- adding crcSalt=<SOURCE> in the monitor stanza
- adding a SEDCMD in props.conf:
SEDCMD-removeheadersfooters=s/\=\=\=\=\=\sJLSLog:\s((Maximum\slog\sfile\ssize\sis\s\d+)|Initial\slog\slevel\sis\sLow)//g
- a regex in transforms.conf:

transforms.conf:
[ignore_lines_starting_with_equals]
REGEX = ^===(.*)
DEST_KEY = queue
FORMAT = nullQueue

props.conf:
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)===
TRANSFORMS-null = ignore_lines_starting_with_equals

When I checked the splunkd logs there were no errors captured, and list inputstatus shows:
percent = 100.00
type = finished reading / open file

Please help me with this issue if anyone has faced and fixed it before. The weird part is that sometimes only the first line of the log file is indexed:
===== JLSLog: Maximum log file size is 5000000

Host/server details:
OS: Solaris 10
Splunk universal forwarder version: 7.3.9
Splunk Enterprise version: 9.1.1

The restriction here is that the host OS can't be upgraded right now, so I have to stay on the 7.3.9 forwarder version.
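A minimal sketch of an alternative wiring that sidesteps the source:: stanza (its syntax above looks malformed) by assigning the sourcetype directly in inputs.conf and keying the header-dropping transform off that sourcetype; the stanza contents are assumptions to adapt, not a confirmed fix:

inputs.conf:
[monitor:///usr/logs/Client*.log*]
index = admin
sourcetype = auth_log
disabled = false

props.conf:
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-dropheader = ignore_lines_starting_with_equals

transforms.conf:
[ignore_lines_starting_with_equals]
REGEX = ^=====
DEST_KEY = queue
FORMAT = nullQueue

Note that index-time transforms run on the indexer (or a heavy forwarder), not on a universal forwarder, so the props/transforms above belong on the Splunk Enterprise side.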
How can I input Zeek logs into Splunk and analyze the data?
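A minimal sketch of one common starting point, assuming Zeek writes its logs to /opt/zeek/logs/current on a host running a forwarder; the path, index name, and sourcetype are placeholders (the Splunk Add-on for Zeek defines its own per-log sourcetypes and field extractions, so it is usually installed alongside an input like this):

inputs.conf:
[monitor:///opt/zeek/logs/current/*.log]
index = zeek
sourcetype = zeek
disabled = false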
Hello,

I have a problem installing a Python module on Splunk. I am getting a "pip not found" error whenever I try to use pip. I am not sure what is wrong here. Please, can someone help me figure this out?
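For context, Splunk's bundled Python interpreter does not ship with pip, which is one common reason for this error. A sketch of a frequently used workaround, assuming a system Python/pip is available whose version matches the bundled interpreter; the app path and module name are placeholders:

Check the bundled interpreter's version:
$SPLUNK_HOME/bin/splunk cmd python3 --version

Install the module into an app directory with the system pip so Splunk scripts can import it from there:
pip3 install --target "$SPLUNK_HOME/etc/apps/my_app/bin" some_module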
Thank you for it, but I need only one mail to be sent even though a recipient has multiple rows of data.
Thanks for this! More tweaking required on my part as some of the subdomains being evaluated have more than 3 levels, but this is a big help in getting me on the right track!
The coalesce function selects a field within a single result.  To combine (aggregate) multiple results, use the stats command again after modifying the url field.

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| stats sum(megabytes) as Download_MB by url
| eval url=replace(url, ".*?\.(.*)","\1")
| stats sum(Download_MB) as Download_MB by url
| sort - Download_MB
Thanks for the input. Escaping the escape characters seems a bit silly, but alright. I couldn't get it working today so I'll try a few more variations next week as I have time. Appreciate the help!
Did not realize that. Thank you for the correction. Removing quotes didn't exclude the Teams events though so I must have something else set wrong. As far as what I have posted, does it seem right? I'm not super familiar with troubleshooting props.conf and transforms.conf settings yet.
Hi All,

I am looking into using some proxy logs to determine download volume for particular streaming sites, and was looking for a way to merge hostnames into one "service". Consider the SPL:

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| stats sum(megabytes) as Download_MB by url
| sort -Download_MB

This will likely return multiple rows like:

cdn1.streaming-site.com 180.3
cdn2.streaming-site.com 164.8
www.streaming-site.com  12.3

I want to merge those all into one row:

streaming-site.com   357.4

I have played around with the coalesce function, but this would be unsustainable for sites like Netflix which have dozens of URLs associated with them. If anyone has any suggestions on how I might combine results with, say, a wildcard (*), I'd love to hear from you!
SOARt_of_Lost,

Thanks for the reply. The whole VPE is kinda clunky, but I guess that's part of what SOAR is for: providing a visual programming interface. I ended up writing a Python module and installing it via the backend procedure with pip.
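For reference, a sketch of what that backend procedure commonly looks like on an on-prem SOAR instance, using the platform's phenv wrapper; the install path /opt/phantom and the package name are assumptions:

/opt/phantom/bin/phenv pip3 install some_module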
Looks closer     
We've been collecting data with the inputs add-on (Input Add On for SentinelOne App For Splunk) for several years now. The applications channel has always been a bit problematic, with the collection process running for several days, but now we haven't seen any data since Monday, February 19th around 5:00 PM. It's February 22nd and we generally see applications data every day.

We started seeing errors on February 16th:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="223" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

And we have seen a few errors since then:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="188" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="188" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

After noting the following in the release notes:

Improvements ... -- Applications input uses a new S1 API endpoint to reduce load on ingest.

we upgraded the add-on from version 5.19 to version 5.20.
Now we're seeing the following messages in sentinelone-modularinput.log:

2024-02-22 13:40:02,171 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="630" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=saving_checkpoint msg='not saving checkpoint in case there was a communication error' start=1708026001000 items_found=0 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="599" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=calling_applications_channel status=start start=1708026001000 start_length=13 start_type=<class 'str'> end=1708630801000 end_length=13 end_type=<class 'str'> checkpoint=1708026001.525169 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="580" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications last_execution=1708026001.525169
2024-02-22 13:40:01,525 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="565" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications type=<class 'dict'>

It appears that the input is running, but we're not seeing any events. We also noted the following in the documentation for version 5.2.0:

sourcetype: sentinelone:channel:applications | SentinelOne API: web/api/v2.1/installed-applications | Description: Deprecated

Does this mean that the input has been deprecated? If so, what does the statement "Applications input uses a new S1 API endpoint to reduce load on ingest." in the release notes mean? And why is the Applications channel still an option when creating inputs through the Splunk UI?

Any information you can provide on the applications channel would be greatly appreciated.
This is very much a question of efficiency. If you have a relatively small number of EventCode 70 events in a short period of time, but the EventCode 250 events are from some time further back, using a subsearch would be more efficient than retrieving both types of events over a long period of time.

You also need to tell us which EventCodes give you User and which give you Active_User. Assuming that EventCode 250 gives you Active_User but 70 gives you User, you can do something like:

| from datamodel:P3
| search EventCode=250 earliest=-1mon ``` earliest value for demonstration purposes only ```
    [ from datamodel:P3
    | search EventCode=70 earliest=-1h ``` earliest value for demonstration purposes only ```
    | stats values(User) as Active_User ``` assuming User is present in EventCode 70 to match Active_User in EventCode 250 ``` ]
Do you mean

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" Status=success OR STATUS=Success OR Status=Failure USCUSTOMERINV
| timechart sum(TotalAmount) span=1mon by Sender_ID
I have logs like this:

2024-02-22 12:49:38:344 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Marshall University-Unimarket TxnType=Invoice TotalAmount=-1916.83 Status=Success
2024-02-22 11:51:12:992 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Mammoth Bio via Coupa TxnType=Invoice TotalAmount=4190.67 Status=Success

The query below gives me the monthly total:

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" Status=success OR STATUS=Success OR Status=Failure USCUSTOMERINV
| timechart sum(TotalAmount) span=1mon

But I need, for each Receiver_ID, the invoice total over a 1-month span, like this:

How do I do that?
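A minimal sketch of one way to split that total out per receiver, assuming Receiver_ID is extracted as a field (note the stray space in "Receiver_ID =" in the raw events, which may affect how the field is auto-extracted):

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" USCUSTOMERINV (Status=Success OR Status=Failure)
| timechart span=1mon sum(TotalAmount) by Receiver_ID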
We upgraded Splunk and then the app to 1.4.6, but kept getting the same errors. The solution was rather silly: the app couldn't run python3.exe because the Python installer had named it python312.exe. We renamed it and the app started working.