All Topics

Good day, I have a query to summarize data per week. Is there a way to display my tables in a better way? The dates for the past month just show as dates in number format; I would like to name the columns Week 1, Week 2, Week 3, etc. if possible.

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, _time
| stats count by user app _time
| chart count by app _time span=1w
| sort app 0
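One hedged way to get those labels, reusing the fields above (the relative-week arithmetic is an assumption about what "Week 1" should mean, i.e. the earliest week in the search window): bin events to week boundaries and compute a label before charting.

index=db_it_network sourcetype=pan* (url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base)
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| bin _time span=1w
| eventstats min(_time) as first_week
| eval week="Week ".(floor((_time - first_week)/604800) + 1)
| chart count over app by week

Since the labels are strings they sort correctly only up to Week 9; zero-pad the number for longer ranges.
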
Hi Splunkers, I have been working on a dashboard for which I need the data for the last 7 months, from Jan 2024 to date. When I was searching for the logs, only the last 3 months of data were showing, i.e., from 10 Jun to date, and gradually all the logs are disappearing. Is there any way to fix this? I tried this query:

| tstats earliest(_time) as first, latest(_time) as last where index=foo
| fieldformat first=strftime(first,"%c")
| fieldformat last=strftime(last,"%c")

The result shows:

index="my-index"    first: Mon Jun 10 04:19:23 2024    last: Tue Aug 27 07:50:04 2024
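Disappearing older data usually points at retention: buckets age out once they exceed the index's frozen time period or size cap. A quick hedged check (assuming you have permission to query the REST endpoint; the index name is a placeholder):

| rest /services/data/indexes splunk_server=local
| search title="foo"
| table title frozenTimePeriodInSecs maxTotalDataSizeMB currentDBSizeMB

If frozenTimePeriodInSecs is around 7776000 (90 days), that would match the roughly 3-month window you are seeing.
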
Hello, I want to write a suppression in Splunk ES that suppresses an event if a specific process occurs at 11 AM every day. This limitation should be applied to the raw logs, because the ES rules execute on a specific time cycle and create notable events. My goal is to suppress the event when the rule runs, but only if the specific process exists at 11 AM. How can I apply this time constraint in the suppression? Can I do this through the search I write? How can I implement this time constraint on raw data? I need to limit the time in the raw event.
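One hedged option is to apply the constraint inside the correlation search itself rather than in the suppression, filtering on the raw event's own timestamp (the process_name value is a placeholder):

... your correlation search ...
| eval event_hour=tonumber(strftime(_time, "%H"))
| where NOT (process_name="target_process.exe" AND event_hour=11)

Here strftime operates on each raw event's _time, so the 11 AM window is evaluated against the event, not against when the rule runs. Note the hour is rendered in the search head's configured timezone.
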
Hi Team, one of our customers reported finding duplicate records in Splunk (duplicate files and duplicate data in files). We want to simulate the scenario in our lab. Could someone help write SPL to find duplicate records?

Regards, Alankrit
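A minimal sketch for surfacing exact duplicates (index and sourcetype are placeholders): group events by their raw text and keep the groups that occur more than once. Listing the sources also shows whether the same file was ingested twice under different paths.

index=your_index sourcetype=your_sourcetype
| stats count values(source) as sources by _raw
| where count > 1
| sort - count
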
I signed up to Splunk or Storm and had to accept the commercial emails to finalize my sign-up. How do I unsubscribe from these emails?
When I want to enable the use case "ESCU - Windows Gather Victim Host Information Camera - Rule", the query in the correlation search looks like this:

index=* source=WinEventLog:Microsoft-Windows-PowerShell/Operational OR source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=* ScriptBlockText="* Win32_PnPEntity *" ScriptBlockText="*SELECT*" ScriptBlockText="*WHERE*" ScriptBlockText="*PNPClass*" ScriptBlockText IN ("*Image*", "*Camera*")
| stats count min(_time) as firstTime max(_time) as lastTime by EventCode ScriptBlockText Computer UserID
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `windows_gather_victim_host_information_camera_filter`

The query references the ScriptBlockText field, but when I check All Fields I can only find a ScriptBlock_ID field. How can I get the ScriptBlockText field? When I explore further, many more correlation searches in the use cases rely on ScriptBlockText.

Thank you
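For context, ScriptBlockText is normally populated from PowerShell EventCode 4104 events, which only exist when Script Block Logging is enabled on the endpoints, and the field extraction typically comes from the Splunk Add-on for Microsoft Windows. A hedged sanity check for whether any such events are arriving at all:

index=* source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| head 10
| table _time host EventCode ScriptBlockText

If no 4104 events come back, the logging policy on the endpoints (not the search) is the likely gap.
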
Is there a way to reference or combine multiple field values into a single name so that they can be referenced by that new name? For example: somefield IN (a,b,c,d). If I run a query for "somefield" I get "a", "b", "c", "d" returned. I want to be able to refer to that set of values by a single name. Is that possible? So if I run a query against the new name, I would get the aggregate results of a, b, c, d?
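If the goal is to treat a set of values of one field as a single label, one hedged approach is an eval-based grouping (the group name is a placeholder; an event type or a tag would be the more permanent knowledge-object version of the same idea):

index=your_index
| eval somegroup=if(in(somefield, "a", "b", "c", "d"), "my_group", somefield)
| stats count by somegroup

Searching for somegroup="my_group" then aggregates a, b, c, and d together.
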
So, I want the distinct count of user_numbers by device, but in the same chart/table I want the distinct count of all the user_numbers in a last column called total. Is it possible to get a stats formula with different fields in the by clause? This is something that I have:

| stats dc(user_numbers) by device

but I also want to show the overall total in the same table:

| stats dc(user_numbers)

Right now I count duplicates when I show this:

| addcoltotals label="Total Members" labelfield=device

I really hope this is possible! Thank you!
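One hedged pattern (the base search is a placeholder): compute the overall distinct count with eventstats before collapsing by device, then carry it through the stats.

index=your_index
| eventstats dc(user_numbers) as total
| stats dc(user_numbers) as distinct_users max(total) as total by device

eventstats runs across all events, so total is the deduplicated overall count rather than a sum of the per-device counts.
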
AI to assist in creating valid regex expressions would be super helpful.
We are on Splunk Cloud 9.1. Has anyone successfully been able to ingest data from SendGrid into Splunk? It looks like the only option they have is a webhook that requires a URL to send to. I am no Splunk wizard, so I may just be missing the easy answer here, but I can't find a way to generate a URL for SendGrid to send into Splunk Cloud.
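One hedged avenue, not verified against SendGrid specifically: create an HTTP Event Collector (HEC) token in Splunk Cloud, which gives you an HTTPS URL a webhook can POST to. The stack name, token, and sourcetype below are placeholders; a quick smoke test from the command line might look like:

# Splunk Cloud HEC endpoint; stack name and token are placeholders
curl "https://http-inputs-<your-stack>.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"sourcetype": "sendgrid:webhook", "event": {"email": "user@example.com", "event": "delivered"}}'

Since SendGrid's Event Webhook posts its own JSON body, check whether the raw endpoint (/services/collector/raw) is a better fit, and whether SendGrid can set the required Authorization header; if not, a small relay in between may be needed.
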
I am looking to record a measurement which is taken after the transition from Home state to Diagnostic state. I am calling the state change the keystone event. The raw keystone event looks like this:

{"bootcount":26,"device_id":"X","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: GCL internal state { new_state: Diagnostic, old_state: Home, conditions: 65600, error_code: 0}", "model_number":"X1","sequence":274,"serial":"123X","software_version":"2.3.1.7682","ticks":26391,"timestamp":1723254756}

My search to find the keystone event looks like:

index="june_analytics_logs_prod" message="* new_state: Diagnostic, old_state: Home*" NOT message=*counts*

After the keystone event, I would like to take the measurements found in the next 5 events; I will call these the data events. The raw data events look like:

{"bootcount":26,"device_id":"x","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: fan: 2697, auger: 1275, glow_v: 782, glow: false, fuel: 0, cavity_temp: 209", "model_number":"X1","sequence":280,"serial":"123X","software_version":"2.3.1.7682","ticks":26902,"timestamp":1723254761}

I would like to take the first 5 data events directly after the keystone event, extract the glow_v value, and take the median of these 5 values as the accepted value. In short, I want to build a query that finds the time of a keystone event, uses this time to find the immediately following data events that match certain criteria, extracts the glow_v value from those data events, and then takes the median of those glow_v values.
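A hedged sketch of one way to stitch this together with streamstats (field names come from the events above; grouping by serial is an assumption you may want to adjust per device):

index="june_analytics_logs_prod" (message="*new_state: Diagnostic, old_state: Home*" OR message="*glow_v:*")
| sort 0 + _time
| eval is_keystone=if(match(message, "new_state: Diagnostic, old_state: Home"), 1, 0)
| streamstats sum(is_keystone) as keystone_id by serial
| where keystone_id > 0 AND is_keystone=0
| rex field=message "glow_v: (?<glow_v>\d+)"
| streamstats count as n by serial keystone_id
| where n <= 5
| stats median(glow_v) as accepted_glow_v by serial keystone_id

The running sum of is_keystone assigns every data event to the keystone that preceded it, the second streamstats keeps only the first 5 data events per keystone, and the final stats takes the median. Note that sort 0 can be expensive over large time ranges.
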
Hi Splunkers, I'm trying to get disk usage for searches run by each user:

| rest /services/search/jobs
| rex field=eventSearch "index\s*=(?<index>[^,\s)]+)"
| search index=$ind$
| eval size_MB = diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user

Is there a way to get disk usage for historical searches, e.g., for a month or more?
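The jobs endpoint only reports artifacts that still exist on disk, so history is limited by job TTLs. One hedged workaround is to schedule the search above (shown here without the dashboard token filter) and write each snapshot into a summary index with collect; the summary index name is a placeholder and must be created first:

| rest /services/search/jobs
| eval size_MB = diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user
| collect index=search_usage_summary

You can then report over index=search_usage_summary for any window that index retains, keeping in mind that long-lived jobs will appear in multiple snapshots.
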
We are using the below query for our alert. When we receive the mail, we want to see the Message in the alert title. In the subject we give Splunk Alert: $name$; when the alert is triggered, we want to view the matched messages in the alert title instead. We tried Splunk Alert: $result.Message$, but only one message shows up, not all of them. How can we do it?

Query:

index=app-index "ERROR"
| eval Message=case(
    like(_raw, "%internal error system%"), "internal error system",
    like(_raw, "%connection timeout error%"), "connection timeout error",
    like(_raw, "%connection error%"), "connection error",
    like(_raw, "%unsuccessfull application%"), "unsuccessfull application",
    like(_raw, "%error details app%"), "error details app",
    1=1, null())
| stats count by Message
| eval error=case(
    Message="internal error system" AND count>0, 1,
    Message="connection timeout error" AND count>0, 1,
    Message="connection error" AND count>0, 1,
    Message="unsuccessfull application" AND count>0, 1,
    Message="error details app" AND count>0, 1)
| search error=1
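$result.<field>$ tokens only read the first result row, so the usual hedged fix is to collapse everything into a single row whose field holds all the messages, then reference that field (the name Messages is a placeholder):

... your existing query ...
| stats values(Message) as Messages
| eval Messages=mvjoin(Messages, ", ")

With that appended, a subject of Splunk Alert: $result.Messages$ should carry the full comma-separated list.
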
We have a requirement to forward different data to multiple Splunk instances. In this case, security data is forwarded to EntServer2, while app data is forwarded to EntServer1. What is best practice regarding Universal Forwarders: set up two Universal Forwarders on the same app server, or set up and configure a single UF to forward to both Enterprise servers?
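A single UF can usually do this with per-input routing via _TCP_ROUTING, which avoids the operational overhead of two forwarder installs. A hedged sketch (hostnames, ports, and monitor paths are placeholders):

# outputs.conf
[tcpout:app_servers]
server = entserver1.example.com:9997

[tcpout:security_servers]
server = entserver2.example.com:9997

# inputs.conf
[monitor:///var/log/myapp]
_TCP_ROUTING = app_servers

[monitor:///var/log/secure]
_TCP_ROUTING = security_servers

Each monitored input is then delivered only to the output group it names.
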
Data rolled to the frozen directory is showing up as inflight data, with a size of 0. There are few public details about inflight-db directories and why they appear; basically Splunk says they are created while Splunk is writing a bucket from warm to cold, but not much more than that.

So say Splunk is writing roughly 100 GB of buckets because you had 3 indexers with buckets 3 months old and you forced all of those buckets to move to cold. As long as the indexers had write access to your storage, should there even be inflight dbs? Or is that too much to write at once, so Splunk writes some of the data, logs an error for the rest, and calls it a day?

Is there a limit on how much can be written to cold at one time? And if a write gets interrupted, why doesn't Splunk detect that and resume where it left off to complete the transfer? I know there are logs, but it seems to me that if 100 buckets start writing and a technical issue happens a quarter or halfway through, the bucket write should either cancel completely and tell me in plain language, or pause and resume once the connection is back up.
Planning to deploy a Splunk architecture with one SH and an indexer cluster, with 100 GB/day data ingestion. Are there any recommended documents for OS partitions (paths with sizes), mount points, and RAID configuration for Linux servers?
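For orientation only, a hedged illustration of a common Linux layout; the entries are placeholder examples, actual sizing depends on retention and replication, and Splunk's official capacity planning docs remain the authority:

# illustrative only; size to your retention requirements
/opt/splunk              separate partition for binaries, configs, internal logs
$SPLUNK_DB (hot/warm)    fastest storage tier (SSD, commonly RAID 10)
$SPLUNK_DB (colddb)      larger, cheaper storage tier

The recurring guidance is to keep hot/warm on the fastest disks and avoid sharing the Splunk data partitions with the OS root volume.
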
I am new to regex. I want to extract just Catalog-Import from the event below. Can anyone help with how I can do this?

[2024-08-22 12:55:56.439 GMT] ERROR CustomJobThread|1154761894|Catalog-Import|GetNavigationCatalogFromSFTP com.demandware.api.net.SFTPClient Sites-ks_jp_rt-Site JOB faadaf233c 09beff21183cec83f264904132 5766054387038857216 - SFTP connect operation failed
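A hedged sketch, assuming the value you want is always the pipe-delimited segment after the numeric thread ID (the field name job_name is a placeholder):

| rex field=_raw "CustomJobThread\|\d+\|(?<job_name>[^|]+)\|"

Against the sample event this captures job_name=Catalog-Import; [^|]+ means "everything up to the next pipe", so it also works when the job name contains spaces.
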
Hello everyone, I'm trying to filter out some logs in the IA-WindowsSecurity application. The events to drop are when EventCode=4634 AND Security_ID="*$". I created an app deployed on an indexer with the following props and transforms config:

props.conf

[WinEventLog]
TRANSFORMS-remove_computer_logoff = remove_logoff

transforms.conf

[remove_logoff]
REGEX =
DEST_KEY = queue
FORMAT = nullQueue

I made the following regex fragments for matching the event: EventCode=4634 and Security_ID=".*\$$". I'm not sure how to correctly put these two regexes together. I did a lot of testing with different types of regexes (in PCRE format), but I wasn't able to make it work. Can someone please help me? Thanks in advance
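A hedged sketch for the combined expression: index-time transforms run against the raw event text, not against search-time fields, so the regex must match what the raw multi-line event actually contains. Assuming the classic WinEventLog layout where EventCode=4634 appears before an account name ending in $ (verify the exact labels, e.g. Security ID vs Account Name, in your raw events first):

[remove_logoff]
REGEX = (?s)EventCode=4634.*Account Name:\s+[^\r\n]*\$
DEST_KEY = queue
FORMAT = nullQueue

The (?s) flag lets . span newlines, which multi-line Windows events need for the two conditions to match in one expression.
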
I need to add the total GB. Please let me know how to add the overall total.

Index          Sourcetype                     GB
aws_vpcflow    aws:vpcflow                    26192.00305
aws_vpcflow    aws:cloudwatchlogs:vpcflow     32.695269
windows        windows:fluentd                19939.02727
windows        windows                        9713.832884
windows        WinEventLog:Security           8.928759
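Assuming the table comes from a search ending in something like a stats sum by Index and Sourcetype, a hedged way to append an overall total row is addcoltotals, summing only the GB column:

... your existing search producing Index, Sourcetype, GB ...
| addcoltotals GB labelfield=Index label="Total"

That adds a final row with Index="Total" and GB equal to the column sum, which for the rows above is about 55886.49.
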