
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I created a summary index to use in a dashboard, because the underlying search has a lot of data and needs to run over large time ranges. The summary index is populated like this:

    <my search query>
    | eval log_datetime=strftime(_time, "%Y-%m-%d %H:%M:%S")
    | rename log_datetime AS "Time (UTC)"
    | table _time, "Time (UTC)", <wanted fields>
    | collect index=sony_summary

I then call it in one of my dashboard panels like this:

    index=sony_summary sourcetype=stash
    | search <passed drop-down tokens>
    | sort 0 -"Time (UTC)"
    | table "Support ID", "Time (UTC)", _time --------

My requirement is that users should not see the summary index data directly, so I created a drilldown linked to a different search. Whenever someone clicks a field value in the table, a new search opens with the clicked support_id:

    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
    </search>
    <!-- Drilldown Configuration -->
    <!-- Enable row-level drilldown -->
    <option name="drilldown">row</option>
    <option name="refresh.display">progressbar</option>
    <drilldown>
      <link target="_blank">/app/search/search?q=search index=sony* sourcetype=sony_logs support_id="$click.value$"&amp;earliest=$time_range.earliest$&amp;latest=$time_range.latest$</link>
    </drilldown>

When I click a field in the panel, the new search opens with the expected support_id, but it uses the token time range. I expect it to open with the time range of the clicked event, based on Time (UTC) or _time. For example, if an event has a support ID with time 07:00 AM, clicking it should open a search scoped to 7 AM, but instead it takes the token time range. Based on a ChatGPT suggestion, I modified it this way (the eval adding epoch_time and epoch_plus60 is new):

    <table id="myTable">
      <search>
        <query>index=sony_summary sourcetype=stash | search <passed drop-down tokens> | sort 0 -"Time (UTC)" | eval epoch_time=_time, epoch_plus60=_time+60 | table "Support ID", "Time (UTC)", _time -------- , epoch_time, epoch_plus60</query>
        <earliest>$time_range.earliest$</earliest>
        <latest>$time_range.latest$</latest>
      </search>
      <!-- Drilldown Configuration -->
      <!-- Enable row-level drilldown -->
      <option name="drilldown">row</option>
      <option name="refresh.display">progressbar</option>
      <drilldown>
        <link target="_blank">/app/search/search?q=search index=sony* sourcetype=sony_logs support_id="$click.value$"&amp;earliest=$row.epoch_time$&amp;latest=$row.epoch_plus60$</link>
      </drilldown>

This works, and the time range now matches the row I clicked. The remaining issue is that I don't want the two helper fields, epoch_time and epoch_plus60, to be visible in the dashboard. They should be hidden completely while the drilldown still works as expected. What should I do here? Am I missing anything? Even when I put those fields last in the panel, my manager still wants them hidden entirely without breaking the drilldown.
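One commonly cited way to hide columns while keeping their values available to drilldown tokens is the table's fields option, which controls only what is displayed; $row.*$ tokens can still read every field the search returns. A minimal sketch, assuming Simple XML and the field names above (worth verifying the hidden fields still resolve in row tokens on your version):

    <table id="myTable">
      ...
      <fields>["Support ID", "Time (UTC)"]</fields>
      <drilldown>
        <link target="_blank">/app/search/search?q=search index=sony* sourcetype=sony_logs support_id="$click.value$"&amp;earliest=$row.epoch_time$&amp;latest=$row.epoch_plus60$</link>
      </drilldown>
    </table>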
Hi everyone, I'm working on a dashboard that compares two different applications. One of the tables shows their performance across different metrics side by side, like this: "Avg Time App1" | "Avg Time App2" | "Max Time App1" | "Max Time App2" | ... Additionally, each row of the table represents a different date, so my team and I can check their performance over an arbitrary time interval.

My idea is to color a given cell based on its value compared to the equivalent value for the other app. So, for example, say "Avg Time App1" = 5.0 and "Avg Time App2" = 8.0 on day X (an arbitrary row). I want to highlight the "Avg Time App2" cell on day X because its value is bigger than App1's.

I'm aware I can color cells dynamically with the <format> block, by setting type="color" and the field to whatever my field is. But I want to do this per row (meaning that even if the cell in the first row of column X is highlighted, the next rows won't necessarily be) and based on a comparison with another cell, from another column, in the same row.

One other detail is that my column names contain a token. So a somewhat related problem I've been having is accessing the cell values, because, to my understanding, it would turn out as something like: $row."Avg Time $app1$"$

If someone could help me implement this conditional coloring idea, I would be very grateful. Thanks in advance, Pedro
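One building block for this kind of per-row comparison is to compute the verdict in SPL rather than in the <format> block, since a colorPalette expression only sees the cell's own value. A minimal sketch, assuming the literal column names "Avg Time App1" and "Avg Time App2" (with tokens, you would substitute them into the eval); single quotes are required in eval to reference field names containing spaces:

    ... existing search ...
    | eval avg_slower=case('Avg Time App1' > 'Avg Time App2', "App1",
                           'Avg Time App2' > 'Avg Time App1', "App2",
                           true(), "tie")

The avg_slower field can then drive the highlighting, for example via dashboard JS or by appending a marker to the displayed value.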
Hi Team, I have been getting a skipped search notification in my CMC overview under Health for quite some time. It is a scheduled report.

    Search name: ESS - Notable Events
    Cron: every 5 mins (1-59/5 * * * *)
    Time range: earliest = -48d@d; latest = +0s (now)
    Message: The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached

Search query:

    `notable`
    | search NOT `suppression`
    | eval timeDiff_type=case(_time>=relative_time(now(), "-24h@h"),"current", 1=1, "historical")
    | expandtoken rule_title
    | table _time,event_id,security_domain,urgency,rule_name,rule_title,src,dest,src_user,user,dvc,status,status_group,owner,timeDiff_type,governance,control
    | outputlookup es_notable_events
    | stats count

It writes the output to an output lookup and takes around 8 minutes to run, per Job Management. Can someone help me understand where the issue lies and what is making this search in particular skip? The skipped percentage is around 50% and the status is critical.
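For context: an 8-minute runtime on a 5-minute cron guarantees the previous run is still going when the next one is due, and a scheduled search defaults to one concurrent instance, so the scheduler skips. A sketch of two common mitigations in savedsearches.conf (stanza name taken from the post; values are examples, not a prescription):

    [ESS - Notable Events]
    # let the scheduler defer the run instead of skipping it
    schedule_window = auto
    # ...or schedule it less often than its runtime
    cron_schedule = */15 * * * *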
We are building an iOS app that uses URLSession for its network traffic. AppDynamics does not collect any traffic made with async/await; for requests that use the traditional completion-handler APIs, AppDynamics still collects them properly.
Hello Community, when we try to open a link to a Splunk URL without a language setting, e.g. via the "Show results" link in an email alert like

    https://our-splunk-address/app/some-app/alert?s=some-alert

the request gets redirected automatically to something like

    https://our-splunk-address/de/app/some-app/alert?s=some-alert

which does not work. The URL should be

    https://our-splunk-address/de-DE/app/some-app/alert?s=some-alert

(see Configure user language and locale | Splunk Docs). This incorrect redirect only happens in our production environment, and only if the browser's language setting is German; English works fine (the redirect is .../en-GB/...). We tested different browsers (Edge, Firefox) with the same results. Our test environment uses the same browsers, redirects correctly, and we can't find any configuration differences between test and production that would explain this behaviour. Have you experienced a similar phenomenon, or can you give me a hint where to look for further clues? Regards, Jens
I am trying to clean up a field that has a suffix of sophos_event_input after the username. Example:

    Username_Field
    Joe-Smith, Adams sophos_event_input
    Jane-Doe, Smith sophos_event_input

I would like the Username field to contain only the user's name. Example:

    Username_Field
    Joe-Smith, Adams
    Jane-Doe, Smith

Basically I want to get rid of the sophos_event_input suffix. How do I go about this?
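A search-time sketch, assuming the suffix is always the literal string sophos_event_input at the end of the value:

    ... | rex field=Username_Field mode=sed "s/\s*sophos_event_input$//"

The same effect is possible with eval, since replace() takes a regex:

    ... | eval Username_Field=replace(Username_Field, "\s*sophos_event_input$", "")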
I want to ask about my lab indexer cluster. My indexer's primary storage reached maximum capacity. Last time, I just reduced maxWarmDBCount based on the existing buckets, and after 1-2 days the data rolled to secondary storage. This time I applied the same modification again, but nothing has rolled for a week. I have cross-checked the bucket counts for the index, which are still above the limit I set, and I verified the config was distributed. Has anyone faced this scenario? #splunk
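For reference, warm-to-cold rolling per index is governed by maxWarmDBCount in indexes.conf; a sketch with a hypothetical index name and an example value:

    [my_index]
    homePath = $SPLUNK_DB/my_index/db
    coldPath = $SPLUNK_DB/my_index/colddb
    thawedPath = $SPLUNK_DB/my_index/thaweddb
    # roll warm buckets to coldPath once more than 40 exist
    maxWarmDBCount = 40

If the warm bucket count stays above the limit after the bundle is applied, it is worth checking for overriding values elsewhere in the configuration layering; "splunk btool indexes list my_index --debug" shows the winning settings and the files they come from.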
I don't understand why the legacy 'run a script' alert action has been deprecated. The official guidelines for creating a 'Custom Alert Action' are too complicated to follow. I attempted to find a guide via Google, but there are too many conflicting methods, and I consistently failed to implement them. I just want a simple and straightforward guide to create a 'Custom Alert Action' that runs a batch file (script.bat) or a PowerShell script file (script.ps1) when the alert is triggered. Or just a 'custom alert action' that does exactly the same thing as the deprecated 'run a script' alert action (just type the batch file name and that's it).

Environment: Splunk Enterprise 9.1 (Windows)
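A minimal sketch of the bare-bones layout, assuming a hypothetical app named run_batch and a hypothetical script path; the stanza name must match the script file name in bin:

    # $SPLUNK_HOME/etc/apps/run_batch/default/alert_actions.conf
    [run_batch]
    is_custom = 1
    label = Run batch file
    description = Runs script.bat when the alert fires
    payload_format = json

    # $SPLUNK_HOME/etc/apps/run_batch/bin/run_batch.py
    import sys, json, subprocess

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "--execute":
            payload = json.loads(sys.stdin.read())  # alert metadata, results link, etc.
            # hypothetical path: point this at your own batch or PowerShell script
            subprocess.run(["cmd.exe", "/c", r"C:\scripts\script.bat"], check=False)

After a restart, the action appears in the alert's "Trigger Actions" list; Splunk invokes bin/run_batch.py with --execute and a JSON payload on stdin each time the alert triggers.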
Dear all, hope you are doing well. I would like to request your assistance regarding an issue we've encountered after upgrading Splunk Enterprise from version 9.1.5 to 9.4.0. Since the upgrade, the Forwarder Management (Deployment Server) functionality is no longer working as expected; despite multiple troubleshooting attempts, the issue persists. I have attached a screenshot showing the specific error encountered. I would greatly appreciate your guidance or recommendations to help resolve this matter. Please let me know if any additional logs or configuration details are needed. Thank you in advance for your support.
I am using the Cisco Security Cloud integration to try to import my Duo logs into Splunk Enterprise (on-prem). Following a plethora of directions, including the Duo Splunk Connector guide, I still cannot get it to work. No data comes through and it stays in a "Not Connected" status. So far, I have verified that:

- The Admin API token has the correct permissions
- The integration is configured with the correct Admin API info (secret key, integration key, API hostname, etc.)
- I am using the newest version of the Cisco Security Cloud app

Does anyone have any tips for troubleshooting this issue? I cannot seem to find any logs, or even a more detailed error code than "Not Connected", although I am fairly sure it should be working.
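Modular-input errors usually surface in splunkd's internal logs rather than in the app UI. A starting-point search, assuming the app's input scripts log under a name containing "cisco" (adjust the term to match whatever the app's scripts are actually called):

    index=_internal sourcetype=splunkd (ERROR OR WARN) *cisco*

ExecProcessor events in particular capture stderr from modular input scripts, which often carries the real connection error.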
Hello, I start Splunk 9.4.3 as a Docker container from the image registry.hub.docker.com/splunk/splunk:latest. However, it terminates after approx. 60 seconds with the message:

    TASK [splunk_standalone : Get existing HEC token] ******************************
    fatal: [localhost]: FAILED! => {
        "changed": false
    }
    MSG:
    GET/services/data/inputs/http/splunk_hec_token?output_mode=jsonadmin********8089NoneNoneNone[200, 404];; AND excep_str: URL: https://127.0.0.1:8089/services/data/inputs/http/splunk_hec_token?output_mode=json; data: None, exception: API call for https://127.0.0.1:8089/services/data/inputs/http/splunk_hec_token?output_mode=json and data as None failed with status code 401: {"messages":[{"type": "ERROR", "text": "Unauthorised"}]}, failed with status code 401: {"messages":[{"type": "ERROR", "text": "Unauthorised"}]}

    PLAY RECAP *********************************************************************
    localhost : ok=69 changed=3 unreachable=0 failed=1 skipped=69 rescued=0 ignored=0

If I start the container with "sleep infinity" and then exec into the container, I can start Splunk with "splunk start" and it works perfectly. Can anyone tell me what the problem is?
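A 401 during the image's Ansible provisioning phase typically means the playbook could not authenticate to splunkd with the admin credentials it expects. The image's documented environment variables set those up; a minimal sketch (the password and token values here are placeholders):

    docker run -d -p 8000:8000 -p 8088:8088 \
      -e SPLUNK_START_ARGS="--accept-license" \
      -e SPLUNK_PASSWORD="ChangeMe123!" \
      -e SPLUNK_HEC_TOKEN="00000000-0000-0000-0000-000000000000" \
      registry.hub.docker.com/splunk/splunk:latest

SPLUNK_PASSWORD must satisfy the default password policy (8+ characters); otherwise admin creation fails and the subsequent REST calls, such as the HEC token lookup above, return 401.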
I am having issues trying to outputlookup to a new, empty KV Store lookup table I made. When I run the following search, I get this error:

    Error in 'outputlookup' command: Lookup failed because collection '<collection>' in app 'SplunkEnterpriseSecuritySuite' does not exist, or user '<username>' does not have read access.

    | makeresults
    | eval <field_1>="test"
    | eval <field_2>="test"
    | eval <field_3>="test"
    | eval <field_4>="test"
    | fields - _time
    | outputlookup <collection>

I redacted the actual data I am using, but it is formatted the same way as above. My KV Store collection has global sharing and everyone can read/write, for testing purposes. What is wrong here and what can I do to fix this?
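One frequent cause: outputlookup addresses a lookup definition, not the raw collection name, so a KV Store collection needs a matching stanza in transforms.conf. A sketch with a hypothetical definition name (the definition's sharing must make it visible from the app you search in):

    # transforms.conf
    [my_kvstore_lookup]
    external_type = kvstore
    collection = <collection>
    fields_list = _key, field_1, field_2, field_3, field_4

    ... | outputlookup my_kvstore_lookup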
Hello Splunk Community, I'm reaching out for guidance on handling Knowledge Objects (KOs) that reside in the default directory of their respective apps and cannot be deleted from the Splunk UI. We observed that some KOs throw the message "This saved search failed to handle removal request", which, as documented, is likely because the KO is defined in both the local and default directories. I have a couple of questions:

1. Can default-directory KOs be deleted manually via the filesystem or another method, if not through the UI?
2. Is there a safe alternative, such as disabling them, if deletion is not possible?
3. From a list of KOs I have, how can I programmatically identify which ones reside in the default directory?

Also, is there a recommended way to handle overlapping configurations between default and local directories, especially when clean-up or access revocation is needed? Any best practices, scripts, or documentation references would be greatly appreciated!
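For question 3, btool reports the file that supplies each setting, which distinguishes default from local copies. A sketch for saved searches (swap in the relevant .conf type for other KO kinds):

    $SPLUNK_HOME/bin/splunk btool savedsearches list "My Saved Search" --debug --app=<app>

Lines prefixed with a path ending in .../default/savedsearches.conf come from default, .../local/... from local; the same check can be scripted over a list of KO names.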
Hi, I upgraded Splunk Enterprise from 9.2.3 to 9.4.3, and the KVStore status is Failed. It migrated successfully to 7.0.14 on one server automatically, but on the second server the migration did not start upon upgrade. Is there a solution to restore the KVStore status and migrate to 7.0.14? It is a standalone server, not part of a clustered environment. Some servers also have KVStore status Failed on version 9.2.3, and I want to fix the status before starting to upgrade them to 9.4.3.

    This member:
    backupRestoreStatus : Ready
    disabled : 0
    featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
    guid : xzy
    port : 8191
    standalone : 1
    status : failed
    storageEngine : wiredTiger
    versionUpgradeInProgress : 0
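The featureCompatibilityVersion error says splunkd cannot reach mongod on 127.0.0.1:8191 at all, so a first step is usually to confirm whether mongod is starting and, if not, why. Two safe checks:

    $SPLUNK_HOME/bin/splunk show kvstore-status
    tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log

mongod.log generally names the startup failure (port conflict, corrupt files, certificate problems) that has to be resolved before the 7.0.14 migration can run.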
Hi everyone, please help me with this requirement: I need Splunk to show events with the current date instead of the date when the file was placed on the host. For instance, if a file was placed on the server on 17th July, the events show a date of 17th July; I want them to show the current date instead. If the current date is 22nd July, the events' date should be 22nd July, and likewise. I have tried DATETIME_CONFIG = CURRENT and DATETIME_CONFIG = NONE in props.conf, but it doesn't work.
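For reference, DATETIME_CONFIG = CURRENT does stamp events with index time, but only if the stanza matches the data and lives on the instance that parses it (indexer or heavy forwarder; a universal forwarder ignores it except for structured data with INDEXED_EXTRACTIONS). A sketch with a hypothetical sourcetype:

    # props.conf on the indexer / heavy forwarder
    [my_file_sourcetype]
    DATETIME_CONFIG = CURRENT

The change only applies to newly indexed events after a restart; already-indexed events keep their original timestamps.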
I have a report scheduled to export daily at 11 PM from my monitoring dashboard. We are in the EST time zone, and the dashboard itself shows the data as expected. But the report PDF in the email contains data based on the GMT time zone, and it does not match my dashboard numbers.

For example:
Expected time frame: 00:00 and 23:00 on 7/22
Getting the GMT time frame: 00:00 and 03:00 on 7/23

How do I fix this? Thanks in advance.
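Scheduled exports render under the report owner's account, and an owner with no timezone preference falls back to the server default (often GMT). One fix is to set the owner's timezone, either in the UI account settings or directly in user-prefs.conf; a sketch assuming the owner's username:

    # $SPLUNK_HOME/etc/users/<owner>/user-prefs/local/user-prefs.conf
    [general]
    tz = US/Eastern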
Hi all, I'm having some issues excluding events from our Juniper SRX logs. These events are ingested directly on our Windows Splunk heavy forwarders, since these two firewalls are the only syslog inputs we have. My current config is as follows:

    # inputs.conf
    [udp://firewallip:port]
    connection_host = ip
    disabled = false
    index = juniper
    sourcetype = juniper

    # props.conf
    [udp://firewallip:port]
    TRANSFORMS-null = TenantToTrust,TrustToTenant
    force_local_processing = true

    # transforms.conf
    [TenantToTrust]
    REGEX = source-zone-name="tenant".*destination-zone-name="trust"
    DEST_KEY = queue
    FORMAT = nullQueue

    [TrustToTenant]
    REGEX = source-zone-name="trust".*destination-zone-name="tenant"
    DEST_KEY = queue
    FORMAT = nullQueue

All we'd like to do is exclude any events that go between the tenant and trust zones in either direction. Any idea where I might be going wrong? Thanks.
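One thing worth checking: props.conf stanzas match a sourcetype, [source::...], or [host::...] pattern, not input stanza names, so [udp://firewallip:port] in props.conf never matches anything. A sketch keyed to the sourcetype from the inputs.conf above:

    # props.conf
    [juniper]
    TRANSFORMS-null = TenantToTrust,TrustToTenant

(For UDP inputs the source takes the form udp:port, so [source::udp:port] is the alternative.)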
Hello everyone, I am using Dashboard Studio to create a dashboard with two tabs, on Enterprise version 9.4.1. Both tabs are visually identical, but in tab 1 I am querying summary indexes, whereas in the second tab I am running normal queries. The 'normal' queries in this tab can be very intensive if a long time range is selected, so I am trying to limit the time selection to a maximum range of two hours. It can be in any day, but the duration between start and end time should not exceed 2 hours (not "latest 2 hours"). I've tried editing the source by following some AI suggestions; most relied on changing the query itself, but that broke the query and returned no results. Does anyone have insights on how to do this, or could you guide me in the right direction?
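One pattern that enforces the cap inside the searches themselves, without altering their logic: append a guard built from addinfo, which exposes the search's effective time bounds. A sketch (7200 seconds = 2 hours):

    <intensive base search>
    | addinfo
    | where (info_max_time - info_min_time) <= 7200
    | fields - info_min_time info_max_time info_sid info_search_time
    ... rest of the query ...

If the selected range exceeds two hours, the panel simply returns no rows. The drawback is that the base search still fetches its initial events, so this protects against rendering huge result sets rather than against the retrieval cost itself.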
Hello all, can anyone tell me how I can access Splunk SOAR Cloud?
Hello. I'm currently using parallelIngestionPipelines = 2 on my indexers, and it works. The servers (Linux) are well-specced, with 24 CPUs and 48 GB RAM.

I'm wondering: has anyone ever tried parallelIngestionPipelines = 4 on their indexers? Does it work, or does it crash?

Thanks.
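For reference, the setting lives in server.conf on each indexer; every additional pipeline set consumes roughly another full set of parsing and indexing threads (and thus CPU cores), so 4 is generally only considered when CPU headroom is ample and disk I/O can keep up:

    # server.conf
    [general]
    parallelIngestionPipelines = 4

A restart is required for the change to take effect.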