
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

@hiepdao whilst on-prem it should be fine but you may need to check if the lib ever needs an update.  The best practise, especially if you ever move to Cloud SOAR, would be to create an app for the actions requiring pandas and then package the pandas .whl file as a dependency to make it more portable. 
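For illustration only, a rough sketch of how the wheel could be declared in the app's JSON metadata - the key names (pip_dependencies) are from memory and may differ between SOAR versions (newer Python 3 builds use variants such as pip3_dependencies), and the wheel filename is just an example, so check the app-building docs for your release:

"pip_dependencies": {
    "wheel": [
        {
            "module": "pandas",
            "input_file": "wheels/pandas-2.1.4-cp39-cp39-manylinux_x86_64.whl"
        }
    ]
}

The .whl file would then ship inside the app package (e.g. under a wheels/ directory) so the dependency travels with the app instead of relying on a manual pip install on each instance.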
Please share some anonymised sample events in code blocks (using the </> button above) so we can see what you are dealing with.
Please give a detailed example of what you want, showing why "where uptime=0" doesn't work for you.
Please show your raw event in a codeblock (using the </> button)
Please show the results not the search
Hi @Athira, try to follow my approach using stats instead of join, applied to your conditions:

index=source "status for : *" "Not available"
| rex "status for : (?<ORDERS>.*?)"
| append [ search Message="Request for : *"
    | rex "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\"" ]
| stats values(UNIQUEID) AS UNIQUEID BY ORDERS

If you have multiple values for UNIQUEID and you want a row for each one, you can add | mvexpand UNIQUEID at the end. As I said, this solution has only one limit: the subsearch must return a maximum of 50,000 results. Ciao. Giuseppe
Update: since Splunk 9.2 the default value of regex_cpu_profiling in limits.conf is true.

regex_cpu_profiling = <boolean>
* Enable CPU time metrics for RegexProcessor. Output will be in the metrics.log file.
  Entries in metrics.log will appear as per_host_regex_cpu, per_source_regex_cpu,
  per_sourcetype_regex_cpu, per_index_regex_cpu.
* Default: true
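If it helps, a rough example of pulling those entries back out of the _internal index once the setting is on - the metric field name (cpu) is an assumption from memory, so check a raw metrics.log event and adjust:

index=_internal sourcetype=splunkd source=*metrics.log* group=per_sourcetype_regex_cpu
| stats sum(cpu) AS regex_cpu_seconds BY series
| sort - regex_cpu_seconds

This should surface which sourcetypes spend the most CPU in regex (e.g. heavy TRANSFORMS/SEDCMD) at parse time.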
Hello ITW, thank you for the reply. Where Uptime=0 won't resolve it, because during a 1-day span some component_hostnames have been up for a few seconds, e.g. 1.0000 or 5.0000. That means they can't be counted as permanent downtime. My query should look only for component_hostnames which had no Uptime value other than 0 over the span of 1 day. Stives
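Restating that requirement as SPL, a minimal sketch (index and sourcetype are placeholders for your data): keep only hosts whose maximum Uptime over the day is still 0, so a single 1.0000 or 5.0000 sample removes the host from the result.

index=your_index sourcetype=your_sourcetype earliest=-1d@d latest=@d
| stats max(Uptime) AS max_uptime BY component_hostname
| where max_uptime=0
| table component_hostname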
Thank you @sainag_splunk. Then how should inputs.conf and outputs.conf be configured in an indexer cluster (I believe outputs.conf will not be there, considering it's an indexer)? We have a deployment server as well. Can you please let me know where it comes into the picture?
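For reference, a rough sketch of the usual pattern (port and path are examples): clustered indexers normally only need a receiving port in inputs.conf, and that config is pushed from the cluster manager's manager-apps (master-apps on older versions) rather than from the deployment server - the deployment server should not manage clustered peers. outputs.conf on indexers is only needed for special cases, such as forwarding their own _internal data to another tier.

# inputs.conf on the peers, distributed via the cluster manager bundle
[splunktcp://9997]
disabled = 0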
I have a heavy forwarder (HF) to which all of our security device logs are pointed, and the HF forwards those logs on to the indexer, but we don't have access to the indexer or the search head. I want to validate that the configuration done on the HF for forwarding particular types of logs is actually getting the data into the indexer. How can I verify that all logs are being forwarded to the indexer? In splunkd.log, "TcpOutEloop" shows that the HF is connected to the indexer, but is there any way to validate that the security device logs pointed at the HF are actually being forwarded to the indexer?
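One hedged suggestion: a heavy forwarder is a full Splunk instance, so it can search its own _internal index locally. metrics.log on the HF records outbound connections and throughput, so something along these lines (the group and field names are from memory - verify against a raw event) shows whether data is actually leaving for the indexer:

index=_internal source=*metrics.log* group=tcpout_connections
| stats latest(_time) AS last_seen BY name
| convert ctime(last_seen)

The definitive check is still a simple count by host/sourcetype on the indexer side, which would need someone with search access there.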
@ITWhisperer I made the query below, but my processed count is coming back blank:

index="abc" sourcetype=600000304_gg_abs_ipc2 "Total records processed -"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount

Raw logs:
2024-10-23 20:40:23.658 [INFO ] [pool-2-thread-1] ArchivalProcessor - Total records processed - 15618
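A hedged observation from comparing the rex with the raw event: the log has a space between the dash and the number ("processed - 15618"), while the pattern expects the digits immediately after the dash, so a variant that tolerates optional whitespace may behave differently:

| rex "Total records processed -\s*(?<processed>\d+)"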
@sainag_splunk I selected the options below; this hid the settings, but the search option also became unavailable to the user. I want the two options below to remain available to the user as well.
In the outer query I am trying to pull the ORDERS which are "Not available". I need to match those ORDERS against the ORDERS in the subquery, and the result to be displayed is ORDERS & UNIQUEID. The common field in the two queries is ORDERS. My requirement is to combine the two log statements on "ORDERS" and pull ORDERS and UNIQUEID into a table. Below is the query I am using, but the result is pulling all ORDERS; I want only the ORDERS and UNIQUEID from the subquery that match the ORDERS which are "Not available" in the first query.

index=source "status for : * | "status for : * " AND "Not available"
| rex field=_raw "status for : (?<ORDERS>.*?)"
| join ORDERS [ search Message=Request for : *
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\"" ]
| table ORDERS UNIQUEID
@ITWhisperer please find my query below:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData" "CARS.UNB."
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| eval totalAchCurrOutstBalAmt=tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),1)))
| eval totalAchBalLastStmtAmt=tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),1)))
| eval totalClosingBal=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
| appendcols [ search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
    | rex "CARS\.UNB(CTR)?\.(?<CARS_ID>\w+)"
    | transaction CARS_ID startswith="Reading Control-File /absin/CARS.UNBCTR." endswith="Completed Settlement file processing, CARS.UNB."
    | eval StartTime=min(_time)
    | eval EndTime=StartTime+duration
    | eval duration_min=floor(duration/60)
    | rename duration_min as CARS.UNB_Duration
    | table StartTime EndTime CARS.UNB_Duration ]
| fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
| appendcols [ search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing" "CARS.UNB."
    | rex "FileEventCreator - Completed Settlement file processing, (?<file>[^ ]*) records processed: (?<records_processed>\d+)"
    | rename file as Files
    | rename records_processed as Records
    | table Files Records ]
| appendcols [ search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
    | head 7
    | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
    | eval EBNCStatus="ebnc event balanced successfully"
    | table EBNCStatus True ]
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
| table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
| sort 0 'Business_Date' 'StartTime'
I'll try to answer in the order of your response. $tok_searchfieldvalue$ is only displayed in the panel to visually demonstrate how the tokens change and update as you flip the radio button. When transferring the concept to your dashboard you will use it differently, possibly to replace a large portion of your search or to decide which search to run; you mentioned the SPL changes based on whether the version is above or below a threshold. Yes, you can have more choices for radio buttons, but the radio input will very quickly get crowded and word-wrap. You can instead use a single-select dropdown input if you want each specific version number as its own option. I only demonstrated a radio button because your OP indicated only two searches (SPL) to pick from, so it works visually OK for that. Yes, you can use the choices to trigger panel hide-and-show, but that is more advanced. Not impossible, but best to start small; you can only eat an elephant one bite at a time.
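To make that concrete, a minimal Simple XML fragment of the pattern being described (token names and choice values are made up for illustration, and this is only a fragment of a full form, not a complete dashboard): the radio input sets a token, the <change> block sets or unsets a second token, and a panel is shown or hidden with depends.

<fieldset>
  <input type="radio" token="tok_version">
    <label>Splunk version</label>
    <choice value="old">Below 9.0</choice>
    <choice value="new">9.0 or above</choice>
    <default>new</default>
    <change>
      <condition value="old">
        <set token="tok_show_old">true</set>
      </condition>
      <condition value="new">
        <unset token="tok_show_old"></unset>
      </condition>
    </change>
  </input>
</fieldset>
<row>
  <panel depends="$tok_show_old$">
    <title>Only rendered when "Below 9.0" is selected</title>
  </panel>
</row>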
When I enable

[WinEventLog]
persistentQueueSize = 5GB

in the windows_ta, all event flow stops. I see the queue file created in var/run/splunk/exec but no events are indexed. If I remove that setting, events flow again...
Hello team, we want to run some custom code inside Splunk SOAR that utilizes the pandas Python package. We can already install pandas and use it with the commands below:

sudo su - phantom
/opt/phantom/bin/phenv pip install pandas

After installing, we can use pandas in custom functions just fine. I want to ask whether this approach is fine, or whether it can lead to any compatibility issues in the future (e.g. SOAR upgrades)? Thanks in advance!
Hi Rick, Makes sense, thanks a lot for your help. 
Agree with @richgalloway. This should be highlighted to Support as it's a Splunk-supported add-on.
I want to monitor AWS log sources across various accounts: whenever logs stop coming in for a particular sourcetype, I need an alert for the specific accounts. I have tried something like the query below, but it's not picking this up right away, so any suggested SPL would be appreciated (not sure whether we can use tstats so it would be much faster):

index=aws sourcetype="aws:cloudtrail" aws_account_id IN(991650019 55140 5557 39495836 157634 xxxx9015763)
| eval now=now()
| eval time_since_last=round(((now-Latest)/60)/60,2)
| stats latest(_time) as last_event_time, earliest(_time) as first_event_time count by sourcetype aws_account_id
| eval time_gap = last_event_time - first_event_time
| where time_gap > 4000
| table aws_account_id first_event_time last_event_time time_gap
| convert ctime(last_event_time)
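For what it's worth, a hedged sketch of a stats-based version of this alert (the one-hour threshold is a placeholder; adjust to your SLA). Note that plain tstats can only group by indexed fields or accelerated data model fields, so | tstats ... by aws_account_id generally won't work unless that field is indexed. Also, accounts that have gone completely silent won't appear in the search window at all, so a lookup of expected accounts (appended and fillnull-ed) is usually needed to catch them.

index=aws sourcetype="aws:cloudtrail" aws_account_id IN(991650019 55140 5557 39495836 157634 xxxx9015763)
| stats latest(_time) AS last_event_time BY aws_account_id sourcetype
| eval hours_since_last=round((now()-last_event_time)/3600,2)
| where hours_since_last > 1
| convert ctime(last_event_time)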