All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

It possibly doesn't work if ClientName has not already been extracted as a field. Try it this way:

<your index> [| inputlookup <your lookup> | table ClientName | rename ClientName AS search] "Certificate was successfully validated"
Please share the events which are not working for you, as the suggested solution works with the sample events you have provided so far:

| makeresults
| eval _raw="the current status is START system goes on … the current status is STOP please do ….. the current status is PENDING."
| multikv noheader=t
| table _raw
| rex "status is\s(?<status>[^\s]+)\s"

It is usually best to provide accurate samples; it tends to reduce the amount of wasted time!
Thank you for your reply. For some reason it doesn't work yet.

index=my_index [| inputlookup blank_clients_test.csv | table ClientName] "Certificate was successfully validated"

For test purposes I have put just a single ClientName in blank_clients_test.csv, and I get 0 results. When I search for the following:

index=my_index BB-H-282XY "Certificate was successfully validated"

I am getting a match in the event. What could be wrong? Is the second column in the lookup table also included? If yes, it would not work: I want to exclude the second column in the lookup, HWDetSystem.

Thanks!
In that case, gather all the information for each user into a single row for that user or submit an idea to Splunk to try to get the functionality changed.
Advanced Bot Detected on Imperva WAF
Backdoor Detected on Imperva WAF
Bot Access Control Detected on Imperva WAF

Can anyone help me find custom search queries for the above use cases?
Hi @richgalloway, Is it possible to sort this according to the year? I have set my search query for the last 3 months of data, so it includes last year's data, but in the chart the calendar weeks 48, 49, etc. come at the end when they should be at the beginning. I have tried some solutions but cannot get it to work properly. Do you have any suggestions?
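One approach, as a sketch only (it assumes `_time` is available on the events and that the week label comes from `strftime`), is to build the chart key from the year and the week together, so that weeks 48-49 of last year sort before weeks 1-2 of this year:

```
| eval yearweek=strftime(_time, "%Y-W%V")
| chart count BY yearweek
| sort 0 yearweek
```

Sorting on the combined "YYYY-Www" string is lexicographic, which matches chronological order across the year boundary.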
Hi, I have a query that returns the count of different response codes per server for 2 days; now I need to find the difference between these two days.

current output:

Respcodes    Srv1    Srv2    Srv3    Srv4    …
200          80      10      100     42
400          12      55      11      0
500          11      34      2       8

expected output:

Date          Respcodes    Srv1    Srv2    Srv3    Srv4    …
2024/02/23    200          80      10      100     42
2024/02/24    200          70      19      11      11
2024/02/23    400          12      55      11      0
2024/02/24    400          44      14      46      89
2024/02/23    500          11      34      2       8
2024/02/24    500          11      34      2       9

If there were also a delta that calculates the difference in count for each server between the two dates, that would be great! Any ideas? Thanks.
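If the raw events are available, one hedged sketch is to bin by day before aggregating, then compute the delta per response code and server. The index name and the field names Respcode and Server are assumptions, as are the hard-coded dates from the expected output:

```
index=<your index> earliest=-2d@d latest=@d
| bin _time span=1d
| eval Date=strftime(_time, "%Y/%m/%d")
| stats count BY Date Respcode Server
| stats sum(eval(if(Date="2024/02/23",count,0))) AS Day1
        sum(eval(if(Date="2024/02/24",count,0))) AS Day2 BY Respcode Server
| eval delta=Day2-Day1
```

The first stats gives the per-day table in the expected shape; the second collapses the two days so the delta can be computed per Respcode/Server pair.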
For a more robust solution have a look at the TrackMe app. It's free if you are not a heavy user of it, though it can be some work to set up. https://splunkbase.splunk.com/app/4621
Hi @karthi2809, the join command kills the system; you can use it only with small amounts of data. You should use other options (the stats command); in addition, you're joining the same search with similar conditions. You can find many examples (also from me!) in the Community. Anyway, it's difficult to help you with this search because there are many missing parts (e.g. pipes) in the shared search; please share the correct one. Ciao. Giuseppe
Hi @BTB, first, move the condition index=<my_index> into the WHERE clause, not the beginning of the search. Also, in my answer I missed a parenthesis in the sixth row; use this:

| tstats count latest(_time) AS _time WHERE index=<my_index> earliest=-6d BY sourcetype
| eval period=if(_time>now()-86400*3,"Last","Previous")
| stats sum(eval(if(period="Last",count,0))) AS Last sum(eval(if(period="Previous",count,0))) AS Previous dc(period) AS period_count values(period) AS period BY sourcetype
| eval diff_perc=(Last-Previous)/Previous*100
| where diff_perc<30

Ciao. Giuseppe
The 17 alerts and reports are not owned by admin.
I updated etc/apps/custom_app/local/data/ui/nav/default.xml with a new sub-menu choice that should display saved searches based on the name pattern:

<collection label="Menu Choice">
    <saved source="unclassified" match="Pattern" />
</collection>

where "Pattern" is a specific pattern in the names of 17 saved searches -- reports and alerts. The alerts are scheduled and the reports are not. All the alerts and reports are shared in App, not Private. This newly added menu choice doesn't appear in the menu, although the menu has many other working choices displaying dashboards or other saved searches. What could be the reason? I tried reloading the app and changing the ownership of one of the alerts and reports, but it didn't help. I am logged in as admin. Thank you.
Thanks, but unfortunately this does not work for me. I'm still getting results for these:

ACTIVE
PENDING
INACTIVE

I only want ACTIVE and INACTIVE in this case.
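If the values live in an extracted field, one way to keep only exact matches and drop PENDING is to filter on the field rather than on raw-text terms. This is a sketch only; the field name `status` is an assumption:

```
<your base search>
| search status IN ("ACTIVE", "INACTIVE")
```

Filtering on the field value compares the whole value, so PENDING (or any other value) cannot slip through the way a loose raw-text match can.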
Hi, I am writing a query in a Splunk dashboard; it is a base search with multiple joined subsearches, and the page is loading very slowly. I need to improve the performance of the dashboard query. This is my query:

index="Test" applicationName="sapi" timestamp log.map.correlationId level message ("Ondemand Started*" OR "Process started*")
| rex field=message max_match=0 "\"Ondemand Started for. filename: (?<OnDemandFileName>[^\n]\w+\S+)"
| rex field=message max_match=0 "Process started for (?<FileName>[^\n]+)"
| eval OnDemandFileName=rtrim(OnDemandFileName, "Job")
| eval "FileName/JobName"=coalesce(OnDemandFileName, FileName)
| rename timestamp as Timestamp log.map.correlationId as CorrelationId level as Level message as Message
| eval JobType=case(like('Message', "%Ondemand Started%"), "OnDemand", like('Message', "%Process started%"), "Scheduled", true(), "Unknown")
| eval Message=trim(Message, "\"")
| table Timestamp CorrelationId Level JobType "FileName/JobName" Message
| join CorrelationId type=left [search index="Test" applicationName="sapi" level=ERROR
    | rename log.map.correlationId as CorrelationId level as Level message as Message1
    | dedup CorrelationId
    | table CorrelationId Level Message1]
| table Timestamp CorrelationId Level JobType "FileName/JobName" Message1
| join CorrelationId type=left [search index="Test" applicationName="sapi" message="*file archived successfully*"
    | rex field=message max_match=0 "\"Concur file archived successfully for file name: (?<ArchivedFileName>[^\n]\w+\S+)"
    | eval ArchivedFileName=rtrim(ArchivedFileName, "\"")
    | rename log.map.correlationId as CorrelationId
    | table CorrelationId ArchivedFileName]
| table Timestamp CorrelationId Level JobType "FileName/JobName" ArchivedFileName Message1
| join CorrelationId type=left [search index="Test" applicationName="sapi" (log.map.processorPath=ExpenseExtractProcessingtoOracle* AND ("*Import*" OR "APL Import*"))
    | rename timestamp as Timestamp1 log.map.correlationId as CorrelationId level as Level message as Message
    | eval Status=case(like('Message', "%GL Import flow%"), "SUCCESS", like('Message', "%APL Import flow%"), "SUCCESS", like('Level', "%Exception%"), "ERROR")
    | rename Message as Response
    | table Timestamp1 CorrelationId Status Response]
| eval Status=if(Level="ERROR", "ERROR", Status)
| eval StartTime=round(strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(Timestamp1, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs, "%H:%M:%S")
| eval Response=coalesce(Response, Message1)
| table Timestamp CorrelationId Level JobType "FileName/JobName" ArchivedFileName Status Response
| search Status=*
| stats count by JobType
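As a general pattern, several joins on the same key can often be replaced by one pass over all the matching events plus a single stats by that key, which is far cheaper than repeated subsearches. This is a minimal sketch only, not a drop-in rewrite: the field names are taken from the query above, and the aggregations you actually need may differ:

```
index="Test" applicationName="sapi"
| rename log.map.correlationId AS CorrelationId
| stats earliest(_time) AS StartTime latest(_time) AS EndTime
        values(level) AS Levels values(message) AS Messages BY CorrelationId
| eval ElapsedTimeInSecs=EndTime-StartTime
```

After the stats, per-event logic (JobType classification, Status, regex extractions) can be applied to the aggregated fields with eval/rex instead of re-searching the index once per join.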
Hey there! I've set up Splunk Enterprise using an AWS AMI. Now I'm attempting to install the Splunk Essentials app, but I'm running into some issues. First, when I tried to upload the .tgz file, it got blocked. Then I attempted to install it through the marketplace, but my correct username and password from splunk.com aren't working. I'm not sure how to fix this. Any help would be appreciated. Thanks!
You did remove the quotes in the second transform you posted. The problem with your first regex is that it hits both the event to remove and the one to keep. This may work:

NewProcessName.*?Teams\.exe<\/Data>.*?ParentProcessName

It looks for Teams.exe after NewProcessName and before ParentProcessName. Always test your regex, like this: https://regex101.com/r/v97Z1h/1

Edit: This may be faster, since it uses fewer steps to find the data:

NewProcessName[^<]+Teams\.exe<

Edit 2: You can also set a sourcetype for the data you are trying to delete. This way nothing is removed before you have seen that all is OK. If sourcetype=ToDelete shows the correct data, then you can send it to nullQueue:

[4688cleanup]
REGEX = NewProcessName[^<]+Teams\.exe<
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ToDelete
Hi All,

We are trying to install Splunk through a Chef script, but the installation gets stuck and times out after 20 minutes. The command we ran is:

/opt/splunkforwarder/bin/splunk enable boot-start --accept-license --no-prompt --answer-yes

When the Splunk installation script runs on the instance, it always hangs the first time, as in the first screenshot. It then works if the command is run again, as shown in the second screenshot. Note: after the first run, the CPU went to 100% and the Splunk process then exited.

First Run:

Second Run:
In the current project, we are sending application logs to Splunk, while the splunk-otel-collector is responsible for sending instrumentation logs to SignalFx. The issue arises because we utilize the cloudFrontID as a correlation ID to filter logs in Splunk, whereas SignalFx employs the traceId for log tracing. I am currently facing challenges in correlating the application logs' correlation ID with SignalFx's traceId. I attempted to address this issue by using the "Serilog.Enrichers.Span" NuGet package to log the TraceId and SpanId. However, no values were logged in Splunk. How can I access the TraceId generated by the OpenTelemetry Collector within the ASP.NET web application (Framework version: 4.7.2)? Let me know if further details are required from my end.
Hello All, logs are not being indexed into Splunk. My configurations are below.

inputs.conf:

[monitor:///usr/logs/Client*.log*]
index = admin
crcSalt = <SOURCE>
disabled = false
recursive = false

props.conf:

[source::(...(usr/logs/Client*.log*))]
sourcetype = auth_log

My log file name pattern:

Client_11.186.145.54:1_q1234567.log
Client_11.186.145.54:1_q1234567.log.~~
Client_12.187.146.53:2_s1234567.log
Client_12.187.146.53:2_s1234567.log.~~
Client_1.1.1.1:2_p1244567.log
Client_1.1.1.1:2_p1244567.log.~~

Some of the log files start with the line below, followed by the log events:

===== JLSLog: Maximum log file size is 5000000

So for this one I tried the following configs one by one, but nothing worked: adding crcSalt=<SOURCE> in the monitor stanza; adding SEDCMD in props.conf:

SEDCMD-removeheadersfooters = s/\=\=\=\=\=\sJLSLog:\s((Maximum\slog\sfile\ssize\sis\s\d+)|Initial\slog\slevel\sis\sLow)//g

and a regex in transforms.conf:

[ignore_lines_starting_with_equals]
REGEX = ^===(.*)
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)===
TRANSFORMS-null = ignore_lines_starting_with_equals

When I checked the splunkd logs there were no errors, and list inputstatus shows:

percent = 100.00
type = finished reading / open file

Please help me out with this issue if anyone has faced and fixed it before. The weird part is that sometimes only the first line of the log file is indexed:

===== JLSLog: Maximum log file size is 5000000

Host/server details: OS: Solaris 10; Splunk Universal Forwarder version 7.3.9; Splunk Enterprise version 9.1.1. The restriction is that the host OS can't be upgraded right now, so I have to stay on the 7.3.9 forwarder.
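Two things worth checking, stated as assumptions to verify rather than a definitive fix: TRANSFORMS and SEDCMD are applied where parsing happens, which with a universal forwarder is the indexer (or a heavy forwarder), so the nullQueue config has no effect if it only lives on the 7.3.9 UF; and anchoring the REGEX to the literal header text is safer than a broad `^===`. A sketch of the parsing-tier config:

```
# props.conf (on the indexer / heavy forwarder, not the UF)
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-null = ignore_lines_starting_with_equals

# transforms.conf (same tier)
[ignore_lines_starting_with_equals]
REGEX = ^={5}\sJLSLog:
DEST_KEY = queue
FORMAT = nullQueue
```

The inputs.conf monitor stanza stays on the forwarder; only the props/transforms pair needs to sit on the tier that parses the data.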
How can I ingest Zeek logs into Splunk and analyze the data?