All Posts


Thank you @sainag_splunk. Then how should inputs.conf and outputs.conf be configured in an indexer cluster (I believe outputs.conf will not be there, considering it's an indexer)? We have a deployment server as well. Can you please let me know where that comes into the picture?
I have a heavy forwarder (HF) to which all security device logs are pointed, and the HF forwards the logs on to an indexer, but we don't have access to the indexer or search head. I want to validate that the configuration done on the HF for forwarding the particular log types is working and that the logs are reaching the indexer. In splunkd.log, "TcpOutEloop" shows the HF is connected to the indexer, but where can we validate the forwarding configuration itself? Is there any way to verify that the security device logs pointed at my HF are actually being forwarded to the indexer?
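If you can search the HF's own _internal index (or have it forwarded somewhere searchable), the forwarder's metrics.log records per-connection tcpout throughput. A sketch, assuming the HF's hostname is my_hf (hypothetical; field names can vary slightly by version):

```
index=_internal host=my_hf source=*metrics.log* group=tcpout_connections
| timechart span=5m avg(_tcp_KBps) by destIp
```

A sustained non-zero throughput per destination indicates the HF is actively shipping data to that indexer.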
@ITWhisperer I made the query below, but my processed count is coming back blank:

index="abc" sourcetype=600000304_gg_abs_ipc2 "Total records processed -"
| rex "Total records processed -(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount

Raw log:

2024-10-23 20:40:23.658 [INFO ] [pool-2-thread-1] ArchivalProcessor - Total records processed - 15618
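Note that the raw event has a space between the dash and the number, so the rex above never matches. Allowing optional whitespace should fix it (a sketch against the same index and sourcetype):

```
index="abc" sourcetype=600000304_gg_abs_ipc2 "Total records processed -"
| rex "Total records processed -\s*(?<processed>\d+)"
| timechart span=1d values(processed) AS ProcessedCount
```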
@sainag_splunk I selected the options below; this hid the settings, but the Search option also became unavailable to the user. I want the two options below to remain available to the user as well.
In the outer query I am trying to pull the ORDERS which are "Not available". I need to match those ORDERS against the ORDERS in the subquery, and display ORDERS and UNIQUEID. The common field between the two queries is ORDERS; my requirement is to combine the two log statements on "ORDERS" and pull ORDERS and UNIQUEID into a table. Below is the query I am using, but the result is pulling all ORDERS. I want to display only the ORDERS and UNIQUEID from the subquery that match the ORDERS that are "Not available" in the first query:

index=source "status for : *" AND "Not available"
| rex field=_raw "status for : (?<ORDERS>.*?)"
| join ORDERS
    [search Message=Request for : *
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""]
| table ORDERS UNIQUEID
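One likely culprit: a non-greedy `(?<ORDERS>.*?)` at the end of a pattern matches the empty string, so ORDERS is empty for every event and the join matches everything. A sketch using a greedy, non-space match instead (field names as in the original; adjust `\S+` to your actual order-ID format):

```
index=source "status for : *" AND "Not available"
| rex field=_raw "status for : (?<ORDERS>\S+)"
| join type=inner ORDERS
    [search Message="Request for : *"
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\":\"(?P<UNIQUEID>[A-Z0-9]+)\""]
| table ORDERS UNIQUEID
```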
@ITWhisperer please find my query below:

index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "StatisticBalancer - statisticData: StatisticData" "CARS.UNB."
| rex "totalOutputRecords=(?<totalOutputRecords>),busDt=(?<busDt>),fileName=(?<fileName>),totalAchCurrOutstBalAmt=(?<totalAchCurrOutstBalAmt>),totalAchBalLastStmtAmt=(?<totalAchBalLastStmtAmt>),totalClosingBal=(?<totalClosingBal>),totalRecordsWritten=(?<totalRecordsWritten>),totalRecords=(?<totalRecords>)"
| eval totalAchCurrOutstBalAmt=tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchCurrOutstBalAmt,"E"),1)))
| eval totalAchBalLastStmtAmt=tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),0)) * pow(10,tonumber(mvindex(split(totalAchBalLastStmtAmt,"E"),1)))
| eval totalClosingBal=tonumber(mvindex(split(totalClosingBal,"E"),0)) * pow(10,tonumber(mvindex(split(totalClosingBal,"E"),1)))
| table busDt fileName totalAchCurrOutstBalAmt totalAchBalLastStmtAmt totalClosingBal totalRecordsWritten totalRecords
| appendcols
    [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log"
    | rex "CARS\.UNB(CTR)?\.(?<CARS_ID>\w+)"
    | transaction CARS_ID startswith="Reading Control-File /absin/CARS.UNBCTR." endswith="Completed Settlement file processing, CARS.UNB."
    | eval StartTime=min(_time)
    | eval EndTime=StartTime+duration
    | eval duration_min=floor(duration/60)
    | rename duration_min as CARS.UNB_Duration
    | table StartTime EndTime CARS.UNB_Duration]
| fieldformat StartTime = strftime(StartTime, "%F %T.%3N")
| fieldformat EndTime = strftime(EndTime, "%F %T.%3N")
| appendcols
    [search index="abc" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing" "CARS.UNB."
    | rex "FileEventCreator - Completed Settlement file processing, (?<file>[^ ]*) records processed: (?<records_processed>\d+)"
    | rename file as Files
    | rename records_processed as Records
    | table Files Records]
| appendcols
    [search index="600000304_d_gridgain_idx*" sourcetype=$Regions$ source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
    | head 7
    | eval True=if(searchmatch("ebnc event balanced successfully"),"✔","")
    | eval EBNCStatus="ebnc event balanced successfully"
    | table EBNCStatus True]
| rename busDt as Business_Date
| rename fileName as File_Name
| rename CARS.UNB_Duration as CARS.UNB_Duration(Minutes)
| table Business_Date File_Name StartTime EndTime CARS.UNB_Duration(Minutes) Records totalClosingBal totalRecordsWritten totalRecords EBNCStatus
| sort 0 'Business_Date' 'StartTime'
I'll try to answer in order of your response. $tok_searchfieldvalue$ is only displayed in the panel to visually demonstrate how the tokens change and update as you flip the radio button. When transferring the concept to your dashboard you will use it differently, possibly to replace a large portion of your search, or to decide which search to run; you mentioned the SPL changed based on a version threshold. Yes, you can have more choices for radio buttons, but the radio input will very quickly get crowded and word-wrap. You can instead use a single-select dropdown input if you want each specific version number as its own option. I only demonstrated a radio button because your OP indicated only two searches (SPL) to pick from, so it works visually OK with that. Yes, you can use the choices to trigger panel hide-and-seek, but that is more advanced. Not impossible, but best to start small; you can only eat an elephant one bite at a time.
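For reference, a minimal Simple XML sketch of the idea (the token name matches the discussion above; index and field names are hypothetical): the radio input stores an entire search string in the token, and the panel just runs whatever the token holds.

```xml
<form>
  <fieldset>
    <input type="radio" token="tok_searchfieldvalue">
      <label>Version</label>
      <choice value="search index=main version_major&lt;9 | stats count">Below 9</choice>
      <choice value="search index=main version_major&gt;=9 | stats count">9 or above</choice>
      <default>search index=main version_major&gt;=9 | stats count</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>$tok_searchfieldvalue$</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```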
When I enable

[WinEventLog]
persistentQueueSize=5GB

in the windows TA, all event flow stops. I see the queue file created in var/run/splunk/exec, but no events are indexed. When I remove that stanza, events flow again...
Hello team, we want to run some custom code inside Splunk SOAR that utilizes the pandas Python package. We can already install and use pandas with the commands below:

sudo su - phantom
/opt/phantom/bin/phenv pip install pandas

After installing, we can use pandas in custom functions just fine. I want to ask whether this approach is fine, or whether it can lead to compatibility issues in the future (e.g. SOAR upgrades)? Thanks in advance!
Hi Rick, Makes sense, thanks a lot for your help. 
Agree with @richgalloway. This should be raised with Support, as it's a Splunk-supported add-on.
I want to monitor AWS log sources across various accounts. Whenever logs stop coming in for a particular sourcetype, I need an alert for the specific accounts. I have tried something like the query below, but it isn't picking them up right away, so any suggested SPL would be appreciated (not sure whether we can use tstats so it would be much faster):

index=aws sourcetype="aws:cloudtrail" aws_account_id IN(991650019 55140 5557 39495836 157634 xxxx9015763)
| eval now=now()
| eval time_since_last=round(((now-Latest)/60)/60,2)
| stats latest(_time) as last_event_time, earliest(_time) as first_event_time count by sourcetype aws_account_id
| eval time_gap = last_event_time - first_event_time
| where time_gap > 4000
| table aws_account_id first_event_time last_event_time time_gap
| convert ctime(last_event_time)
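A tstats-based sketch of the "no recent events" check (assumption: aws_account_id is available as an indexed field for tstats; the 4-hour threshold is a placeholder to tune):

```
| tstats latest(_time) as last_event_time where index=aws sourcetype="aws:cloudtrail" by sourcetype aws_account_id
| eval hours_since_last=round((now()-last_event_time)/3600,2)
| where hours_since_last > 4
| convert ctime(last_event_time)
| table aws_account_id sourcetype last_event_time hours_since_last
```

Comparing latest(_time) to now() (rather than earliest to latest within the search window) is what detects a source that has gone quiet.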
Hello @niketn, thanks for your help and contributions to the Splunk community. I have a question for you: can we remove the categories from a Splunk dashboard made with Dashboard Studio? Since they're JSON, I tried using "xAxisText":"" to no avail. Thanks in advance!
Thanks all, I've split out the forwarded events and subscriptions to be more granular, and the dedicated Sysmon channel + the TA is working well. I think we're running roughly 9 minutes behind, which isn't too bad, but I want to ensure we don't miss any logs. I'm still collecting some event IDs but not seeing them in Splunk at all, although I do see them in other solutions. Can I increase the cache size of the universal forwarder itself? I've increased persistentCacheSize to 10GB, but I'm unsure whether I've set this property correctly or whether it impacts the windows TA. Thanks
@uagraw01 Please refer to https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Authorizeconf

Based on what I see, the role might have inherited "admin_all_objects" from a different role. Also check the "edit_own_objects" and "list_all_objects" capabilities.

[capability::admin_all_objects]
* Lets a user access all objects in the system, such as user objects and knowledge objects.
* Lets a user bypass any Access Control List (ACL) restrictions, similar to the way root access in a *nix environment does.
* The Splunk platform checks this capability when accessing manager pages and objects.

Use this to see where the role's capabilities come from:

./splunk btool authorize list role_Splunk_engineer --debug

If this helps, please upvote.
Data Flow:
- Data goes DIRECTLY from the UF to the indexers on port 9997 (not to the cluster manager)
- The cluster manager only handles configuration distribution

Configuration Management:
- Props and transforms configs are deployed via the cluster manager
- These configs are pushed to the indexer peers via the indexer cluster bundle

Processing Location:
- All parsing happens on the indexers (indexer peers)
- Each indexer applies the deployed configurations independently

For deeper understanding, refer to:
- https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774
- props.conf documentation: docs.splunk.com/Documentation/Splunk/9.1.0/Admin/Propsconf
- docs.splunk.com/Documentation/ITSI/4.17.0/Configure/transforms.conf

Since there are many pipeline components, I encourage you to read through these resources for a complete understanding. Simple data flow here. If this helps, please upvote.
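To illustrate the first point, a minimal outputs.conf sketch for a UF sending directly to the indexer peers on 9997 (the indexer hostnames are hypothetical):

```
# outputs.conf on the universal forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# forwarder load-balances across these peers; no traffic goes to the cluster manager
server = idx1.example.com:9997, idx2.example.com:9997
```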
(Attaching the updated docs link since my prior comment, as the URL has changed: https://dev.splunk.com/enterprise/docs/developapps/createapps/buildapps/adduithemes) @hettervik could you please confirm that the search page in your app context is displaying correctly (ex. /en-US/app/MyCustomApp/search)? Also, curious which pages specifically are light mode for your app that you're expecting to be dark? Custom app pages will require additional updates to related files.
Please can you show an example of where the events are not sorted by these two fields?
How do you know which response is related to which request?