All Topics


Hi Team, one of our customers reported finding duplicate records in Splunk (duplicate files and duplicate data within files). We want to simulate the scenario in our lab. Could someone help write SPL to find duplicate records?

Regards, Alankrit
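A minimal SPL sketch for surfacing duplicate events (the index name and grouping fields are assumptions; adjust to your data):

```spl
index=main
| eval raw_hash=md5(_raw)
| stats count, values(source) as sources, earliest(_time) as first_seen by raw_hash
| where count > 1
| convert ctime(first_seen)
```

Grouping on an md5 of _raw treats events as duplicates only when the raw text matches exactly; add source to the by clause if you want to distinguish duplicate files from duplicate events within one file.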
I signed up for Splunk (or Storm) and had to accept commercial emails to finalize my sign-up. How do I unsubscribe from these emails?
When I want to enable the use case "ESCU - Windows Gather Victim Host Information Camera - Rule", the query in the correlation search is:

index=* source=WinEventLog:Microsoft-Windows-PowerShell/Operational OR source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=* ScriptBlockText= "* Win32_PnPEntity *" ScriptBlockText= "*SELECT*" ScriptBlockText= "*WHERE*" ScriptBlockText = "*PNPClass*" ScriptBlockText IN ("*Image*", "*Camera*")
| stats count min(_time) as firstTime max(_time) as lastTime by EventCode ScriptBlockText Computer UserID
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `windows_gather_victim_host_information_camera_filter`

This query references the ScriptBlockText field, but when I check All Fields I can only find a ScriptBlock_ID field. How can I get the ScriptBlockText field? When I explore further, many correlation searches use ScriptBlockText.

Thank you
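ScriptBlockText normally comes from PowerShell Script Block Logging events (EventCode 4104), which the Splunk Add-on for Microsoft Windows extracts automatically. A hedged sketch to check whether 4104 events exist and to extract the field inline (assumes XML-rendered events; the rex pattern is an assumption based on the usual XmlWinEventLog layout):

```spl
index=* source="XmlWinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4104
| rex field=_raw "(?s)<Data Name='ScriptBlockText'>(?<ScriptBlockText>.*?)</Data>"
| table _time, Computer, ScriptBlock_ID, ScriptBlockText
```

If no 4104 events exist at all, Script Block Logging likely is not enabled on the endpoints (it is off by default and usually turned on via Group Policy), which would explain why only ScriptBlock_ID appears.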
Is there a way to reference or combine multiple fields under a single name, so that they can be referenced by that new name? For example, given fields a, b, c, and d, I want to be able to refer to them collectively as "somefield" (as in somefield IN (a,b,c,d)), so that a query on "somefield" returns the aggregate results of a, b, c, and d. Is that possible?
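One SPL sketch (field names taken from the example above): mvappend collects every value of the four fields into one multivalue field, which can then be used like any other field:

```spl
... | eval somefield=mvappend(a, b, c, d)
| stats count by somefield
```

Use coalesce(a, b, c, d) instead if you only want the first non-null value per event. For a reusable definition, the eval can be saved as a calculated field in props.conf.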
So, I want the distinct count of user_numbers by device, but in the same chart/table I also want the distinct count of all user_numbers, in a last column called total. Is it possible to get a stats formula with different fields in the by clause? This is what I have:

| stats dc(user_numbers) by device

but I also want to show the overall total in the same table:

| stats dc(user_numbers)

Right now I count duplicates and show this:

| addcoltotals label="Total Members" labelfield=device

I really hope this is possible! Thank you!
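A sketch of one way to do this (the index name is a placeholder): eventstats computes the overall distinct count across all events before stats splits the results by device, so the total is a true distinct count rather than a sum of per-device counts.

```spl
index=main
| eventstats dc(user_numbers) as total
| stats dc(user_numbers) as distinct_users, max(total) as total by device
```

This avoids the double-counting that addcoltotals gives when the same user_number appears on multiple devices.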
AI to assist in creating valid regex expressions would be super helpful.
We are on Splunk Cloud 9.1. Has anyone successfully been able to ingest data from SendGrid into Splunk? It looks like the only option they have is a webhook that requires a URL to send to. I am no Splunk wizard, so I may just be missing the easy answer here, but I can't find a way to generate a URL for SendGrid to send into Splunk Cloud.
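Splunk's HTTP Event Collector (HEC) is the usual way to get such a URL: create a token in Splunk Cloud under Settings > Data Inputs > HTTP Event Collector, then point the webhook at the collector endpoint. A sketch of the endpoint shape (the stack name and token are placeholders, and whether SendGrid's Event Webhook can supply the token, e.g. as a custom header, is something to verify on the SendGrid side):

```
# Splunk Cloud HEC endpoint shape (placeholders):
https://http-inputs-<your-stack>.splunkcloud.com/services/collector/event

# Test the token from a shell before wiring up the webhook:
curl "https://http-inputs-<your-stack>.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"sourcetype": "sendgrid:events", "event": {"email": "test@example.com", "event": "delivered"}}'
```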
I am looking to record a measurement which is taken after the transition from Home state to Diagnostic state. I am calling the state change the keystone event; the raw keystone event looks like this:

{"bootcount":26,"device_id":"X","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: GCL internal state { new_state: Diagnostic, old_state: Home, conditions: 65600, error_code: 0}", "model_number":"X1","sequence":274,"serial":"123X","software_version":"2.3.1.7682","ticks":26391,"timestamp":1723254756}

My search to find the keystone event looks like:

index="june_analytics_logs_prod" message=*new_state: Diagnostic, old_state: Home* NOT message=*counts*

After the keystone event, I would like to take the measurements found in the next 5 events; I will call these the data events. The raw data events look like:

{"bootcount":26,"device_id":"x","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC", "location":{"city":"X","country":"X","latitude":X,"longitude":X,"state":"X"},"log_level":"info", "message":"client: fan: 2697, auger: 1275, glow_v: 782, glow: false, fuel: 0, cavity_temp: 209", "model_number":"X1","sequence":280,"serial":"123X","software_version":"2.3.1.7682","ticks":26902,"timestamp":1723254761}

I would like to take the first 5 data events directly after the keystone event, extract the glow_v value, and take the median of these 5 values as the accepted value.

In short, I want to build a query that finds the time of a keystone event, uses this time to find the immediately following data events that match certain criteria, extracts the glow_v value from those data events, and takes the median of these glow_v values.
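A streamstats-based sketch (assumes a single device per search; add serial to each by clause to handle several devices at once). Each keystone event increments a keystone_id counter, which groups the data events that follow it:

```spl
index="june_analytics_logs_prod" serial="123X"
    (message="*new_state: Diagnostic, old_state: Home*" OR message="*glow_v:*")
| sort 0 _time
| eval is_keystone=if(like(message, "%new_state: Diagnostic, old_state: Home%"), 1, 0)
| streamstats sum(is_keystone) as keystone_id
| where is_keystone=0 AND keystone_id > 0
| streamstats count as pos by keystone_id
| where pos <= 5
| rex field=message "glow_v: (?<glow_v>\d+)"
| stats median(glow_v) as accepted_glow_v, earliest(_time) as window_start by keystone_id
```

The first where clause drops the keystone events themselves (and anything before the first keystone); the second keeps only the first 5 data events per keystone window.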
Hi Splunkers, I'm trying to get disk usage for searches run by each user.

| rest /services/search/jobs
| rex field=eventSearch "index\s*=(?<index>[^,\s)]+)"
| search index=$ind$
| eval size_MB = diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user

Is there a way to get disk usage historically, e.g. for the past month or more?
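Since | rest /services/search/jobs only shows jobs still on disk, one sketch (the summary index name and sourcetype are assumptions) is to schedule the snapshot, e.g. hourly, and collect it into a summary index:

```spl
| rest /services/search/jobs
| eval size_MB=diskUsage/1024/1024
| stats sum(size_MB) as size_MB by author
| rename author as user
| collect index=summary sourcetype=search_diskusage
```

Historical reporting then becomes a normal search over the collected data, e.g.:

```spl
index=summary sourcetype=search_diskusage
| timechart span=1d max(size_MB) by user
```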
We are using the below query for our alert. When we receive the mail, we want to see the Message value in the alert title. In the subject we currently have "Splunk Alert: $name$"; when the alert is triggered, we want to see the matched messages in the alert title instead. We tried "Splunk Alert: $result.Message$", but only one message shows up, not all of them. How can we do it?

Query:

index=app-index "ERROR"
| eval Message=case(
    like(_raw, "%internal error system%"), "internal error system",
    like(_raw, "%connection timeout error%"), "connection timeout error",
    like(_raw, "%connection error%"), "connection error",
    like(_raw, "%unsuccessfull application%"), "unsuccessfull application",
    like(_raw, "%error details app%"), "error details app",
    1=1, null())
| stats count by Message
| eval error=case(
    Message="internal error system" AND count>0, 1,
    Message="connection timeout error" AND count>0, 1,
    Message="connection error" AND count>0, 1,
    Message="unsuccessfull application" AND count>0, 1,
    Message="error details app" AND count>0, 1)
| search error=1
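$result.Message$ only reads the first row of the results, so one sketch is to collapse all matching messages into a single field and reference that token instead (appended to the end of the existing query; the field name all_messages is an arbitrary choice):

```spl
...
| search error=1
| stats values(Message) as all_messages
| eval all_messages=mvjoin(all_messages, ", ")
```

The subject line would then be: Splunk Alert: $result.all_messages$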
We have a requirement to forward different data to multiple Splunk instances. In this case, security data is forwarded to EntServer2, while app data is forwarded to EntServer1. What is best practice regarding Universal Forwarders? Set up two Universal Forwarders on the same app server, or set up and configure a single UF to forward to both Enterprise servers?
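A single UF can route different inputs to different destinations. A hedged sketch of the wiring (hostnames, ports, and stanza names are placeholders):

```
# outputs.conf
[tcpout]
defaultGroup = app_group

[tcpout:app_group]
server = entserver1.example.com:9997

[tcpout:security_group]
server = entserver2.example.com:9997

# inputs.conf -- route a security input to EntServer2
[WinEventLog://Security]
_TCP_ROUTING = security_group
```

Inputs without an explicit _TCP_ROUTING go to the defaultGroup, so only the security inputs need the extra setting.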
Data rolled to the frozen directory is showing up as inflight data, with a reported size of 0. There are few details about inflight-db directories or why they happen; Splunk basically says they are created when writing from warm to cold, but not much more than that.

So say Splunk is writing 100 GB worth of buckets: if you had 3 indexers with buckets 3 months old and you forced all of those buckets to move to cold, and the indexers had write access to the storage, should there even be inflight dbs? Or is that too much to write at once, so Splunk writes some of the data, logs an error for the rest, and calls it a day?

Is there a limit to how much can be written to cold at one time? And if a write gets interrupted, why doesn't Splunk detect that and resume where it left off to complete the transfer? I know there are logs, but it seems like it should work like streaming a movie: pause, then resume when ready. Better yet, if 100 buckets start writing and a technical issue occurs a quarter or halfway through, the bucket write should either cancel completely and tell me in plain language, or pause and resume when the connection is back up.
Planning to deploy a Splunk architecture with one SH and an indexer cluster, with 100 GB/day data ingestion. Are there any recommended documents covering OS partitions (paths with sizes), mount points, and RAID configuration for Linux servers?
I am new to regex. I want to extract just Catalog-Import from the log line below. Can anyone help me with how to do this?

[2024-08-22 12:55:56.439 GMT] ERROR CustomJobThread|1154761894|Catalog-Import|GetNavigationCatalogFromSFTP com.demandware.api.net.SFTPClient Sites-ks_jp_rt-Site JOB faadaf233c 09beff21183cec83f264904132 5766054387038857216 - SFTP connect operation failed
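A rex sketch (the field name job_name is an arbitrary choice): the pattern matches the numeric ID between the first two pipe delimiters, then captures everything up to the next pipe.

```spl
... | rex field=_raw "\|\d+\|(?<job_name>[^|]+)\|"
```

For the sample line above, \|\d+\| matches |1154761894| and [^|]+ captures Catalog-Import.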
Combine the power of AppDynamics and Splunk Cloud Platform to pinpoint issues faster in traditional and hybrid application environments! With a single click, Log Observer Connect for AppDynamics lets you zoom in on the relevant logs from AppDynamics in Splunk Cloud's search and reporting interface. Log Observer Connect for AppDynamics uses deep linking from apps, tiers, nodes, business transactions, and transaction snapshots (traces) in AppDynamics to view the related logs in Splunk Cloud. That way, you can figure out what is happening in your application faster, without losing context.

What does AppDynamics do? AppDynamics is part of Cisco's observability solutions. It identifies potential performance issues in your application and infrastructure stack before they occur, and reveals business context that shows which issues could impact your business most.

Key benefits of Log Observer Connect for AppDynamics:

- Accelerate root cause analysis with deeper context on events in your three-tier applications by integrating Splunk's unmatched log management with AppDynamics
- Centralize your troubleshooting workflow and eliminate tool sprawl by seamlessly connecting two world-class observability products
- Reduce MTTR and MTTD when incidents arise in traditional environments with better visibility and shared insights across teams

Learn more about and take a guided tour of Log Observer Connect for AppDynamics here! This is the first AppDynamics and Splunk Platform product integration release, but rest assured, much more is to come, so stay tuned!
We’re improving the look of Observability Cloud’s search capability. Search is a fast and easy way to navigate, and can also be a good way to discover available resources that are relevant to your use case, especially if you’re just getting started with Observability Cloud.  To access search, click the magnifying-glass icon in the top right corner of any page in Observability Cloud. Here’s how search in Observability looks after this update.   New look for search in Observability Cloud. Search displays results by category. This not only makes the results easy to read, but also helps you discover the different kinds of resources that are available in Observability Cloud. If a “more results” link is available, you can click it to see more matching results in that category. Category headers are also clickable, and take you to the ‘home’ destination for that category.  If you already know what kind of Observability resource you want to find, you can start your search with a prefix like ‘service:’ to focus on just one category. The product documentation, linked beneath the search bar, lists all supported prefixes. An example search using keyboard controls and a search prefix ("workflow:"). For efficiency and accessibility, you can search using only keyboard controls. Open search with the keyboard shortcut ctrl+shift+a, navigate the results list with arrow keys and enter, and dismiss the search with esc.  This update also removes items from search that users found less helpful: The shortcut to “Search metrics for …” no longer appears at the end of search results. You can still search for metrics by name, or navigate to Metric finder for more complex metrics searches. Actions like ‘Create Dashboard’ can be found in the Create menu, which you can open by clicking the ‘+’ icon that is directly to the right of the magnifying-glass icon for search.  Click here to learn more about search in Observability Cloud
Hello everyone,

I'm trying to filter out some logs in the IA-WindowsSecurity application. The events to drop are those where EventCode=4634 AND Security_ID matches "*$".

I created an app deployed to the indexers with the following props and transforms config:

props.conf
[WinEventLog]
TRANSFORMS-remove_computer_logoff = remove_logoff

transforms.conf
[remove_logoff]
REGEX =
DEST_KEY = queue
FORMAT = nullQueue

I have regexes matching each condition separately: EventCode=4634, and Security_ID ending in "$" (i.e. .*\$$). I'm not sure how to correctly combine these two regexes into one. I did a lot of testing with different PCRE regexes, but I wasn't able to make it work.

Can someone please help me? Thanks in advance
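One caveat worth noting: at index time, the transform's REGEX runs against the raw event text, not against extracted fields like Security_ID, so both conditions have to be matched in _raw. A hedged sketch (the "Security ID:" line layout is an assumption; check what your raw 4634 events actually contain, and match the "Account Name:" line instead if that is where the trailing $ appears):

```
[remove_logoff]
REGEX = (?ms)EventCode=4634.*?Security ID:\s+\S+\$
DEST_KEY = queue
FORMAT = nullQueue
```

The (?ms) flags let . cross line breaks in the multi-line event, which is what allows a single regex to tie the EventCode line to the Security ID line further down.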
I need to add the total GB. Please let me know how to add the overall total.

Index          Source-Type                   GB
aws_vpcflow    aws:vpcflow                   26192.00305
               aws:cloudwatchlogs:vpcflow    32.695269
windows        windows:fluentd               19939.02727
               windows                       9713.832884
               WinEventLog:Security          8.928759
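A sketch assuming the table comes from a stats result with fields index, sourcetype, and GB: addcoltotals appends a row containing the sum of the named numeric field.

```spl
... | stats sum(GB) as GB by index, sourcetype
| addcoltotals labelfield=index label="Total" GB
```

labelfield/label control where the word "Total" appears in the appended row.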
Hi,

I have logs written in the below manner:

26/08/2024 10:27 method=are status=failed run_id_123
26/08/2024 10:28 method=are status=failed run_id_123
26/08/2024 10:29 method=are status=failed run_id_123
26/08/2024 10:30 method=are status=completed run_id_123
failure_reason1
failure_reason_2
failure_reason_3
failure_reason_4

I'm trying to check whether the latest retry completed or failed, and if it failed, print the failure reasons that appear on the next 5 lines.

Please help.
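A sketch, assuming a run_id field is extracted, the index name is a placeholder, and the failure-reason lines carry no status= field (so first(status) over time-descending events returns the most recent retry's status, skipping the reason lines):

```spl
index=app_logs "run_id_123"
| sort 0 - _time
| stats first(status) as latest_status,
        values(eval(if(isnull(status), _raw, null()))) as failure_reasons
| eval failure_reasons=if(latest_status="completed", null(), failure_reasons)
```

If the reason lines should be limited to only those adjacent to the final status event, add a time filter or a head before the stats.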
Hi,

I am trying to create a case with your official support. Our user is "unionub". The required input field "Select Entitlement" (a combo box) is always empty, and I cannot select anything, so I cannot proceed to create the case. I tried with both Firefox and Chrome.

Please advise.

Altin Karaulli
Security Officer
Unionbank