All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a search where I am comparing two indexes for a matching field value, and I am trying to filter where Business = 1X. Here's the SPL:

index=csmp OR index=aws-business-map
| eval BindleNew = case(sourcetype="sim_csmp", AWSAccountName, sourcetype="csv", BindleName)
| stats values(IssueUrl), values(AWSAccountName) as AWSAccountName, values(BindleName), values(Business) by BindleNew
| search AWSAccountName!=""

I am unsure where to put the Business="1X" clause. Also, if we have more indexes like csmp that I am trying to compare to aws-business-map, how do we go about matching four indexes to aws-business-map?
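
A minimal sketch of one way to place that filter, assuming you want to keep only rows where Business is 1X. The values(Business) result is renamed so it can be referenced after the stats, and the case() is simply extended for each additional index/sourcetype you want to map (index=other1, index=other2 and sourcetype="other_sourcetype" are placeholders, not real names from the question):

index=csmp OR index=aws-business-map OR index=other1 OR index=other2
| eval BindleNew = case(sourcetype="sim_csmp", AWSAccountName, sourcetype="csv", BindleName, sourcetype="other_sourcetype", AWSAccountName)
| stats values(IssueUrl) as IssueUrl, values(AWSAccountName) as AWSAccountName, values(BindleName) as BindleName, values(Business) as Business by BindleNew
| search AWSAccountName!="" Business="1X"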

Hello all, I have two lookups: lookup1.csv with a "host" field and lookup2.csv with a "Host" field. I want to see if any hosts match. Pretty silly, but I'm blanking on this for some reason. Here is how I was doing it, but it doesn't seem to find the hit (even when I purposefully add a matching host for testing):

| inputlookup lookup1.csv
| rex field=host "(?<host>[^.]+)\."
| dedup host
| appendpipe [ | inputlookup lookup2.csv ]
| table host Host
| eval results = if(match(upper(Host),upper(host)), "hit", "miss")
| table host Host results
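
A minimal sketch of a different approach, assuming each lookup holds one hostname per row: normalize both fields to a common key and let stats bring the rows together, since the if(match(...)) above compares fields that live on separate rows and so never fires:

| inputlookup lookup1.csv
| rex field=host "(?<host>[^.]+)\."
| eval key=upper(host)
| append [| inputlookup lookup2.csv | eval key=upper(Host)]
| stats values(host) as host, values(Host) as Host by key
| where isnotnull(host) AND isnotnull(Host)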

How do I fetch the events where "started" is present in the Splunk log but the corresponding "completed" event is not available?

... | transaction build_number,type startswith="started" .....

What needs to be written here?
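
A minimal sketch of one way this is often handled with transaction, keeping your existing field list and assuming "completed" is the closing marker: keep the transactions that were never closed and filter on closed_txn:

... | transaction build_number, type startswith="started" endswith="completed" keepevicted=true
| where closed_txn=0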

Hello everyone, I'm new to Splunk and I have a question about timecharts: is it possible to create a timechart for every single device? For example, I created a timechart with this:

| datamodel release_management_info flat
| search Location=* AND Model=*
| timechart span=month distinct_count(Name) by Category

But this doesn't cover all devices, as some of them haven't generated events in a few months.
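
A minimal sketch of one way to get one series per device rather than per category, assuming Name identifies the device. Note that a device with no events at all in the search window still won't appear unless you bring in a separate inventory lookup:

| datamodel release_management_info flat
| search Location=* AND Model=*
| timechart span=1mon count by Name limit=0 useother=f
| fillnull value=0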

I have Sophos and SonicWall firewalls in my network and installed Splunk for log gathering. I then configured Sophos in Splunk and it is collecting all logs from Sophos, but the Sophos dashboard shows no data. Please guide me on how to get all Sophos alerts into the dashboard.
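
A minimal sketch of a first check, assuming the logs are arriving but under a different index or sourcetype than the dashboard searches expect (the wildcards here are placeholders to adjust):

index=* (sourcetype=*sophos* OR host=*sophos*) earliest=-24h
| stats count by index, sourcetype, host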

My search is supposed to return some data with double quotes in it, but when I use the table command the results are displayed encoded. Example:

Search:
index=* "america" "PurchaseOrgE"
| rex "Query being run \[(?<Query>(.*?)(?=\]))"
| eval ParsedQuery = replace(Query," AND "," AND ")
| table ParsedQuery

Expected result:
SELECT M2HTab1."SupplierKey.UniqueName" AS "SupplierKey_UniqueName",M2HTab1."Name"

Current result:
SELECT M2HTab1.&quot;SupplierKey.UniqueName&quot; AS &quot;SupplierKey_UniqueName&quot;,M2HTab1.&quot;Name&quot;

As you can see in the examples, the double quotes are encoded, making it hard to properly read the results. Is there a way to prevent this from happening? Thanks in advance.
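
A minimal sketch of one workaround, assuming the &quot; entities are literally present in the extracted Query field rather than being added at display time: decode them in the eval before tabling:

index=* "america" "PurchaseOrgE"
| rex "Query being run \[(?<Query>(.*?)(?=\]))"
| eval ParsedQuery = replace(Query, "&quot;", "\"")
| table ParsedQuery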

Hi @richgalloway, this is with respect to your solution posted in https://community.splunk.com/t5/Splunk-Search/Searchquery-error/m-p/509508. Since that thread is from 2020 and is marked as resolved, I have created this new thread.

The issue is about this error message observed in Splunk index=_internal:

Failed to read size=1 event(s) from rawdata in bucket Rawdata may be corrupt, see search.log. Results may be incomplete!

You shared that if the bucket prefix is "rb_", it is a replicated bucket, and thus we should stop the indexer, delete the bucket, then restart the indexer; the cluster master will create a new replicated bucket.

First, when the prefix is "db_", what does it stand for, and what actions should be taken for it?

Secondly, I also observed the bucket prefix "hot_v1". What does it stand for, and what actions should be taken for it?

Thirdly, you stated the specific file may be corrupt. I need your inputs on the below:
1. How do I find out whether the file became corrupt, or whether the reason is different?
2. If it did become corrupt, how do I find the details of the file, such as:
2.1 From which forwarder was the data sent?
2.2 At what timestamp did the file become corrupt?

Thank you
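
A minimal sketch of one way to check which buckets Splunk itself currently flags as corrupt, as a starting point for question 1 above (the index name is a placeholder):

| dbinspect index=yourindex corruptonly=true
| table bucketId, state, path, startEpoch, endEpoch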

Hi All, there is a requirement where a temporary cluster has to trigger the Splunk API to run a command and generate a report. This cluster has to trigger the Splunk API right before it gets terminated. However, the search should only run 3 hours after the API is called, and once the report is generated it should be sent via email and the process should end (run once, not recurring every 3 hours). This is because one log that is part of that command gets pushed 3 hours after the temporary cluster is terminated (the data is not real-time). I only have this option to automate, because the call has to be made by a cluster that is ready for termination. So how can I schedule a search to run 3 hours after it is triggered and send the report as an email? Please let me know if there are any better options to achieve this. Looking forward to your suggestions. Thanks in advance.

Hello, I have a question regarding TA-Exchange-Mailbox in the Splunk App for Microsoft Exchange. I am using this add-on on my deployment server to parse the Exchange logs, but the logs are not parsed on the search head. I copied the default conf files to local and made the changes to receive the logs, but they are still not parsed, especially the message tracking ones. Any idea on how to configure it? Thank you in advance!

Hi, we are discontinuing Opsgenie as our alerting tool. I want to understand whether I can disable it globally for all alerts by disabling it in the "Alert Actions" section, or whether there is another way to do it. Thanks, Vaibhav
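
A minimal sketch of a search to first inventory which saved searches reference the Opsgenie action before disabling anything; the action-name wildcard is an assumption, so adjust it to however the Opsgenie add-on names its alert action:

| rest /servicesNS/-/-/saved/searches
| search actions="*opsgenie*"
| table title, eai:acl.app, eai:acl.owner, actions
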
Dear Splunk team, regarding the mentioned blog entry -- does the UF support sending to multiple destinations ("Data Cloning") or is the paid version -- HF -- required?   Thanks in advance for your effort, Frank

Hello All, I am searching for corrupt data in Splunk, and so I executed the query below:

index=_internal sourcetype=splunk_search_messages "corrupt" OR "corrupted"

I got the following errors:

Error 1:
message=[wd*****] [subsearch]: Failed to read size=4 event(s) from rawdata in bucket='dummyIndex~119~8X88XXY-X8XX-88X8-888X-X88X88XX8888' path='/u01/ovz/data/dummyIndex/db/db_1611661166_1611222333_111_8X88XX8X-X8XX-88X8-888X-X88X88XX8888. Rawdata may be corrupt, see search.log. Results may be incomplete!

Error 2:
01-01-2023 07:01:01.098 ERROR SRSSerializer [12729 RemoteTimelineReadThread] - cannot read file magic -probably corrupt

Error 3:
Error decompressing zstd block: Corrupted block detected

I need your help to understand and get details about Error 1, Error 2 and Error 3. Error 1 says to check search.log, so it would be helpful if you could share how to fetch the relevant information from that file. So far, for Error 1, I found https://community.splunk.com/t5/Splunk-Search/Searchquery-error/m-p/509508, which states that the file may be corrupt. Any information about the other two errors will be very helpful. Thank you
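
A minimal sketch that pulls the bucket and path out of the Error 1 style messages so they can be grouped, based on the bucket='...' and path='...' fragments shown above (the corrupt_bucket and corrupt_path field names are just extraction names chosen for this sketch):

index=_internal sourcetype=splunk_search_messages "Rawdata may be corrupt"
| rex "bucket='(?<corrupt_bucket>[^']+)'"
| rex "path='(?<corrupt_path>[^']+)'"
| stats count, latest(_time) as last_seen by corrupt_bucket, corrupt_path
| convert ctime(last_seen)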

Hi, I have an SPL query which identifies users based on particular criteria. I want to notify them by sending an email directly from Splunk. How can I do this, i.e., have Splunk pick the email address from the search results and send an email, and how can I specify the email body in Splunk, including links?
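
A minimal sketch of one pattern for this, assuming an SMTP server is already configured in Splunk and that the field holding each user's address is called user_email (the index, sourcetype, field names and URL below are placeholders): run your query, then use map to fire one sendemail per result row:

index=myindex sourcetype=mydata status="flagged"
| table user_email
| map maxsearches=100 search="| makeresults | sendemail to=\"$user_email$\" subject=\"Please review\" message=\"Hello, please review the details here: https://example.com/details\""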

I am using https://localhost:8089/services/authentication/users?output_mode=json to get users from Splunk, but I only see one user. Under the admin user view there are more. The user that receives only one user via the API has the admin role. Why are the rest of the users not returned?
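
A minimal sketch of a comparison from the search side, which hits the same endpoint and can help show whether this is a permissions issue or a paging/parameter issue with the raw REST call (count=0 asks for all entries rather than a limited page):

| rest /services/authentication/users count=0
| table title, realname, roles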

Hello, I have the table below and I want to expand the errors values onto separate rows without repeating the names column:

names    errors
B        3 4 5
C        1 3
D        3 4 5
E        1 5

I want the output to be in this form:

names    errors
B        3
         4
         5
C        1
         3

(and so on for D and E)

Thank You! Happy Splunking!
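
A minimal sketch of one way to get that layout, assuming errors is a multivalue field (if it is a space-delimited string, add | makemv delim=" " errors first): expand it, then blank out the repeated names values with streamstats:

... | mvexpand errors
| streamstats count as row_in_group by names
| eval names=if(row_in_group=1, names, "")
| fields - row_in_group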

How do I start Splunk Enterprise (8.x) on a Linux server, i.e. what is the start command?

Hi All, I want to export all the health rules configured for multiple applications. Is there a way to export the list of health rules in AppDynamics without requiring admin rights? The method mentioned on the documentation page requires certain special rights which I don't have: https://docs.appdynamics.com/appd/23.x/latest/en/extend-appdynamics/appdynamics-apis/alert-and-respond-api/health-rule-api#id-.HealthRuleAPIv23.1-RetrieveaListofHealthRulesforanApplication

GET <controller_url>/controller/alerting/rest/v1/applications/<application_id>/health-rules

I'm trying to gather some data for analysis, but I don't have access to the admin console. Any help or guidance would be appreciated!

Morning all, I have a PowerShell 2 script that sends an email to people when my alert is triggered. I can't use the email action as my network doesn't have an internet connection. When the alert is triggered, it is not automatically running the script. The script itself works, as I've run it independently of Splunk and the email was sent. Is the format wrong, or am I missing something? Thanks in advance!
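
A minimal sketch of a check that is often useful here, assuming the script is wired up as a script or custom alert action: look at what splunkd logged when it tried to execute it. The component names below are the usual suspects rather than a guaranteed list, so treat them as assumptions:

index=_internal sourcetype=splunkd (component=ExecProcessor OR component=sendmodalert) earliest=-4h
| search powershell OR ps1 OR ERROR
| table _time, component, log_level, _raw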

My jobs are showing 0 events. How do I resolve this?
Both are in different accounts. Can a universal forwarder help?