All Topics

Hi Team, per a business requirement, I need to get the details below from the same AutoSys batch, with the corresponding outputs displayed on a single row in a table:
1. Last execution time
2. Execution time of a specific search keyword, i.e., "Completed invokexPressionJob and obtained queue id ::"
3. Number of times the "ERROR" keyword is present

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| stats latest(_time) as latest_time
| convert ctime(latest_time)
| append [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | search "Completed invokexPressionJob and obtained queue id ::"
    | stats latest(_time) as last_success_time
    | convert ctime(last_success_time)]
| append [search index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
    | rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
    | stats count(level) by level
    | where level IN ("ERROR")]
| append [| makeresults | eval job_name="Print Job"]
| table latest_time last_success_time count(level) job_name
| stats list(*) as *

The query above works fine. From a query performance perspective, am I achieving the output the right way? Is there a better way to achieve it? I ask because I need to apply a similar set of queries to 10 other batch jobs inside the Splunk dashboard. Kindly suggest!!
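One alternative I am considering is collapsing the three appends into a single pass over the data, since all of them scan the same index/source. An untested sketch, reusing the same filters and placeholder names:

index="<indexid>" Appid="<appid>" host IN (<host01>) source="<log_path01>"
| rex field=_raw "\s(?P<level>[^\/]+)\s\[main\]"
| eval success_time=if(searchmatch("Completed invokexPressionJob and obtained queue id ::"), _time, null())
| stats latest(_time) as latest_time, max(success_time) as last_success_time, count(eval(level="ERROR")) as error_count
| eval job_name="Print Job"
| convert ctime(latest_time) ctime(last_success_time)
| table latest_time last_success_time error_count job_name

That way each job is one search instead of four, and for 10 jobs the per-job searches might even merge into one search with a by-clause on whatever field distinguishes the jobs.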
Splunk apps are essential for maximizing the value of your Splunk experience. Whether you're using the default apps that come with Splunk, premium apps like Enterprise Security or IT Service Intelligence, or popular third-party apps from Splunkbase, the extensible Splunk Platform consistently delivers innovation at scale to enterprises worldwide. To ensure the apps you rely on meet the highest quality standards, we're introducing new governance rules to Splunkbase, our app marketplace. These guidelines are designed to give you confidence that the apps you download will integrate seamlessly with your environment, operate securely, and scale with your business needs.

Splunkbase has been a cornerstone of our ecosystem for over a decade, hosting over 3,000 apps at its peak. While we prioritized marketplace growth and low-friction app listing, and the Splunk developer community was eager to contribute to that, we've heard customer feedback that additional quality controls are needed to further improve the reliability of apps and developers' commitment to continued maintenance and upgrades. In response to insights from both our customers and the developer community, we're taking several immediate steps to improve the quality of the apps on Splunkbase.

Early this year, we introduced and later enforced our first set of governance standards:

Apps must be compatible with at least one currently supported version of Splunk.
Apps must be updated at least once every 24 months.

These standards are designed to ensure that apps on Splunkbase are actively maintained by their developers and do not fall into neglect. After enforcing these guidelines, we archived approximately 900 apps from Splunkbase. Developers who wished to update and unarchive their apps were able to do so by adhering to the new standards and releasing updated versions. This is the first step toward upholding and improving Splunkbase app quality. Over the next year, we plan to progressively roll out additional governance standards to further enhance the experience of finding, installing, and using apps on Splunkbase. Alongside these new standards, we are committed to providing developers with enhanced tooling that will make compliance with these rules as simple and seamless as possible.

Have questions? Please reach out to us on the #appdev Splunk Slack channel. Not a member yet? Join us on Slack today!
My apologies for such a noob question. I literally got dropped into a Splunk environment and I know little to nothing about it. I have an index (foo as an example) and I'm told it's based on Oracle audit logs. However, the index was built for us by the Admin, and all I get is blank looks when I ask what exactly is IN the index. So my question is... how can I interrogate the index to find out what is in it? I ran across these commands:

| metadata type=sourcetypes index="foo"
| metadata type=hosts index="foo"

This is a start, so now I have some sourcetype "keywords" (is that right?) and I can see some hosts. But I suspect that's just the tip of the iceberg, as it were, given the index itself is pretty darn big. I'm an Oracle guy, and if I wanted to get familiar w/ an Oracle structure I would start w/ looking at the table structures, note the fields in all the tables, and get a diagram if one was available. I don't have that option here. I don't have the rights to "manage" the index or even create my own. So I have an index and no real clue as to what is in it...
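For what it's worth, the closest thing I've found so far to "describe table" is fieldsummary, which lists the fields that actually occur in the events along with sample values. A sketch; it scans raw events, so narrow the time range first:

index="foo" earliest=-24h
| fieldsummary maxvals=5
| table field count distinct_count values

And | tstats count where index=foo by sourcetype host gives event counts per sourcetype and host from the index metadata without reading the raw events.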
Hello, I just upgraded my Splunk Enterprise from 9.2.1 to 9.2.2, and I saw that the bundled OpenSSL is version 1.0.2zj. This version is vulnerable to the critical CVE-2024-5535 vulnerability. Is there a future patch for Splunk Enterprise 9.2.x that upgrades the embedded OpenSSL? Best regards, LAIRES Jordan
Good day, I have a query that checks my Entra logs to see which Conditional Access policies get hit. The query returns results like the ones below, but I would like it to display only the policies that were a success or applied, and not the ones that were not applied.

Current output (CA and CAName are multivalue columns):

CA: success, failure, failure
CAName: CA-Office-MFA, CA-Signin-LocationBased, CA-HybridJoined

CA: notApplied, success, failure
CAName: CA-Office-MFA, CA-Signin-LocationBased, CA-HybridJoined

CA: notApplied, success, success
CAName: CA-Office-MFA, CA-Signin-LocationBased, CA-HybridJoined

What I want instead:

CA: success, failure, failure
CAName: CA-Office-MFA, CA-Signin-LocationBased, CA-HybridJoined

CA: success, success
CAName: CA-Signin-LocationBased, CA-HybridJoined

CA: success, failure
CAName: CA-Signin-LocationBased, CA-HybridJoined

index=db_azure_entraid sourcetype="azure:monitor:aad" command="Sign-in activity" category=SignInLogs "properties.clientAppUsed"!=null NOT app="Windows Sign In"
| spath "properties.appliedConditionalAccessPolicies{}.result"
| search "properties.appliedConditionalAccessPolicies{}.result"=notApplied
| rename "properties.appliedConditionalAccessPolicies{}.result" as CA
| rename "properties.appliedConditionalAccessPolicies{}.displayName" as CAName
| dedup CA
| table CA CAName
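For reference, one way to filter the multivalue results pairwise is mvzip/mvexpand, so each result stays attached to its policy name before filtering. An untested sketch, reusing the field paths from the query above:

index=db_azure_entraid sourcetype="azure:monitor:aad" command="Sign-in activity" category=SignInLogs "properties.clientAppUsed"!=null NOT app="Windows Sign In"
| spath "properties.appliedConditionalAccessPolicies{}.result" output=CA
| spath "properties.appliedConditionalAccessPolicies{}.displayName" output=CAName
| streamstats count as event_id
| eval pair=mvzip(CA, CAName, "|")
| mvexpand pair
| eval CA=mvindex(split(pair, "|"), 0), CAName=mvindex(split(pair, "|"), 1)
| where CA!="notApplied"
| stats list(CA) as CA, list(CAName) as CAName by event_id
| table CA CAName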
We are developing a Splunk app that uses an authenticated external API. In order to support the Cloud Platform, we need to pass the manual check for the cloud tag, but the following error occurred, and we couldn't pass.

================
[ manual_check ] check_for_secret_disclosure - Check for passwords and secrets.
details: [ FAILED ] key1 value is being passed in the url which gets exposed in the network. Kindly add sensitive data in the headers to make the network communications secure.
================

code:

req = urllib.request.Request(f"https://api.docodoco.jp/v6/search?key1={self.apikeys['apikey1']}...
req.add_header('Authorization', self.apikeys['apikey2'])

We understand that confidential information should be transmitted via HTTP headers or a POST body and should not be included in URLs. Since "key1" is not confidential information, we believe there should be no issue with including it in the URL. Due to the external API's specifications, "key1" must always be included in the URL, so we are looking for a way to pass this manual check. For example, if there is a support desk, we would like to explain that there is no issue with the part flagged in the manual check. Does anyone know of such a support channel? Alternatively, if there is a way to provide additional information to the reviewers conducting this manual review, we would like to know (for example, adding comments to the source code).
Hi All, I am trying to pantag search results to a dynamic address group, but I'm getting the error below. Please help if anyone has come across the same.

External search command 'pantag' returned error code 2. Script output = "ERROR URLError: code: 401 reason: Key Expired: LUFR...dHc9 has expired."
When we try to run a report on the deployment server to get the hosts that are reporting to Splunk, it gives the error below:

Unable to determine response format from HTTP Header
Connection failed with Read Timeout
The REST request on the endpoint URI /services/deployment/server/clients?count=0 returned HTTP 'status not OK': code=502, Read Timeout.

Can anyone please suggest a workaround?
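In case it helps, the rest generating command accepts a timeout argument (the default is 60 seconds), so raising it may get past the read timeout while the deployment server enumerates a large client list. A sketch; the listed output fields are what I believe that endpoint returns:

| rest /services/deployment/server/clients count=0 timeout=300
| table hostname ip lastPhoneHomeTime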
Hi at all, I tried to use this visualization to display a process tree and it runs, but I have an issue: some leaves of the tree aren't displayed. I have only around 1,900 rows, so I'm not hitting the 250,000-row limit, and not the 1,000-level limit either, because I have at most 5 levels. What could the issue be? Thank you for your help. Ciao. Giuseppe
Hi, I found a 2011 Splunk discussion ("72798") about "considering adding the concept of a 'search head user account' on the indexer to allow the indexer administrator to restrict what the search head can do". Does anybody know if this is somehow available or doable in 2024? My case is that I need a standalone search head with access to a subset of all indexes in the cluster, but at the same time full control of the search head (Splunk admin capabilities, except changes to the searchable index list). The aim is to hand the SH over to be managed by a third party.
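For reference, the closest standard mechanism I'm aware of is a role with a restricted index list in authorize.conf on the search head. It doesn't fully solve my case, since anyone with admin on the SH can edit it, but for completeness (role and index names are made up):

# authorize.conf on the standalone search head
[role_thirdparty_admin]
srchIndexesAllowed = index_a;index_b
srchIndexesDefault = index_a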
Question 1: The last column is longer than the others, which isn't aesthetic. I know I can adjust the height by editing the <option name="height"> label, but the returned data changes all the time; if I set it too high, it looks weird. I want to solve two problems: 1. Don't show the scroll bar; the table should size itself automatically to fit my column data no matter how much data I have. 2. I want every column to share the space evenly.

Question 2: If my event is very short, the bar is too small and narrow, which also isn't aesthetic. Can I define a minimum bar size, or a circle, myself (like what I drew on the image)?
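For question 1, the Simple XML table options I'm aware of control paging and wrapping rather than a true auto-height; a sketch of what I've tried so far (option values are examples):

<table>
  <option name="count">10</option>
  <option name="wrap">true</option>
</table>

For question 2, I haven't found a built-in minimum bar size, so that part may need a custom visualization.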
Hi, can anyone please advise on a search query to find the overall health status of VMware using metric logs?

index - vmware_metric

SPL -

| mstats avg("vsphere.usage") prestats=true WHERE "index"="vmware-metrics" AND "host"="system1.local" AND ("host"="system2" OR "uuid"="12457896") span=10s
| timechart avg("vsphere.vm.cpu.usage") AS Avg span=10s
| fields - _span*
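For comparison, an untested variant of the query above: with prestats=true, the timechart has to aggregate the same metric that mstats produced, so both lines should reference one metric name (host and metric names are taken from the post):

| mstats avg("vsphere.vm.cpu.usage") prestats=true WHERE index="vmware-metrics" AND host="system1.local" span=10s
| timechart avg("vsphere.vm.cpu.usage") AS Avg span=10s
| fields - _span*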
Hi all, I am a bit of a newbie here, and am trying to set up HEC on Splunk Cloud; however, the URL I have created following the event collector documentation (https://docs.splunk.com/Documentation/SplunkCloud/8.0.2007/Data/UsetheHTTPEventCollector) doesn't appear to be working. Looking at the HEC dashboard, occasionally there is some activity showing, but it tells me the URL is incorrect. I have tried numerous changes to the URL, and followed tons of advice on here, but nothing appears to be working. I am clearly missing something and would really appreciate some guidance.

https://http-inputs-myhostname.splunkcloud.com:443//services/collector/event/authorisationheader

I have tried replacing event with raw, and changed the port, although I'm using a Splunk Cloud Platform instance rather than a free trial. I have removed SSL and re-enabled it. I would be very grateful for any advice and support here. Thank you
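For reference, a manual test along the lines of the HEC docs, with a single /services/collector/event path and the token passed in an Authorization header rather than in the URL (hostname and token are placeholders):

curl "https://http-inputs-myhostname.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'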
Hello, I have a distributed Splunk architecture and I am trying to optimise/trim the received logs using the Ingest Actions feature. However, I get the error below: I tried to create a new rule set on the heavy forwarder and the indexer, but it returned the error message "this endpoint will reject all requests until pass4SymmKey has been properly set." So, I want to check: where should I implement this feature, on the indexer or the HF? And are there any prerequisites to implement it?
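For reference, the pass4SymmKey that this message refers to appears to live in server.conf under [general]; a sketch of what I plan to try, though I'm not certain this is the only prerequisite:

# server.conf on the instance where the rule set is created
[general]
pass4SymmKey = <a non-default shared secret>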
Hi, I would like to extract a field from JSON logs which are already in a prettified format. I would like to extract a field named "clientTransactionId" from the sample data below.

{
   @timestamp: 2024-09-05T10:59:34.826855417+10:00
   appName: TestApp
   environment: UAT
   ivUser: Ashish
   level: INFO
   logger: com.app.login
   message: New user state created - state_id: XXXX-YYYYYY, key_id: twoFactorAuth, key_value: {"tamSessionIndex":"1d1ad722-XXXX-11ef-8a2b-005056b70cf5","devicePrint":"DDDDDDDDDDD","createdAt":"2099-09-05T00:59:34.734404799Z","updatedAt":"2099-09-05T00:59:34.734404799Z","clientSessionId":"ppppppppppppp","sessionId":"WWWWWWWWW","clientTransactionId":"8fd2353d-d609-XXXX-52i6-2e1dc12359m4","transactionId":"9285-:f18c10db191:XXXXXXXX_TRX","twoFaResult":"CHALLENGE","newDevice":true,"newLocation":false,"overseas":true} with TTL: 46825
   parentId:
   spanId: 14223cXXXX6d63d5
   tamSessionIndex: 1d1ad722-6b22-11ef-8a2b-XXXXXXX
   thread: https-jsse-nio-XXXX-exec-6
   traceId: 66d90275ecc565aa61XXXXXXXX02f5815
}
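Since clientTransactionId sits inside the JSON embedded in the message text, a plain rex can pull it out regardless of the nesting. A sketch; the index and sourcetype are placeholders:

index=<your_index> sourcetype=<your_sourcetype> "clientTransactionId"
| rex "\"clientTransactionId\":\"(?<clientTransactionId>[^\"]+)\""
| table _time clientTransactionId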
Hello members, I'm facing problems with parsing event details in Splunk. I have forwarded the events from a HF to the indexers, and they are now searchable, but I'm facing issues with field extractions and event details because values are truncated. For example, if I have a sample event like this:

CEF:0|fireeye|HX|4.8.0|IOC Hit Found|IOC Hit Found|10|rt=Jul 23 2019 16:54:24 UTC dvchost=fireeye.mps.test categoryDeviceGroup=/IDS categoryDeviceType=Forensic Investigation categoryObject=/Host

the categoryDeviceType value is truncated in field extraction, so it displays only "Forensic" and the rest of the string is cut off. Can anyone please help with this matter?

My props.conf is:

[trellix]
category = Custom
pulldown_type = 1
TIME_FORMAT = ^<\d+>
EVAL-_time = strftime(_time, "%Y %b %d %H:%M:%S")
TIME_PREFIX = %b %d %H:%M:%S
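My working theory is that automatic key=value extraction stops at the first space, so multi-word CEF values like "Forensic Investigation" lose everything after the space. A search-time rex with a lookahead to the next key= seems to capture the full value (sketch, assuming [trellix] is the sourcetype):

sourcetype=trellix
| rex field=_raw "categoryDeviceType=(?<categoryDeviceType>.+?)(?=\s+\w+=|$)"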
Hi, I have a requirement to perform an end-to-end search for troubleshooting in a dashboard. I am using multiple tokens inside the dashboard. Some tokens have a condition to be set or unset depending upon null values. However, if any of the tokens is not null, then I should concatenate the tokens and pass the combined token to the other sub-searches. Note: there is always at least one token which is not null. I tried, but the other panels always say 'Search is waiting for input'. Below is a sample snippet from the XML dashboard.

<search><query>index=foo</query></search>
<drilldown>
  <eval "combined">$token1$. .$token2$. .$token3$. .$token4$. $token5$</eval>
  <set "combined_token">$combined$</set>
</drilldown>
<panel>
  <search><query>index=abc $combined_token$</query></search>
</panel>
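The pattern I'm experimenting with is to give every token an empty default in an <init> block so it is never unset, then build the combined token with <eval token="..."> inside the drilldown. A rough sketch with two of the tokens; I haven't verified this end to end:

<init>
  <set token="token1"></set>
  <set token="token2"></set>
</init>
...
<drilldown>
  <eval token="combined_token">$token1|s$." ".$token2|s$</eval>
</drilldown>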
I have installed free Splunk Enterprise on my local system, and it can be accessed via localhost:8000. I have also configured the webhook receiver in this instance to run on port 8088 via the HTTP Event Collector settings. I tried ngrok to expose localhost:8000 and localhost:8088 and used that public URL as a webhook listening server, but Splunk is not receiving any events. I can see my ngrok server being hit with the events, but it seems it's not able to forward them over to Splunk. What am I doing wrong here? What's the right way to expose my localhost Splunk instance to start receiving these webhook events? Thank you in advance for the help!
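For anyone retracing this: only the HEC port needs tunneling, and the webhook has to target the collector path with the token header. What I'm trying now (the ngrok URL and token are placeholders; HEC SSL is disabled so the tunnel talks plain HTTP to localhost):

# tunnel only the HEC port
ngrok http 8088

# then point the webhook (or a manual test) at the collector endpoint
curl "https://<your-id>.ngrok.io/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "webhook test"}'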
Hello Members, I have data coming from a HF, indexed on the indexer, and I can search it. The problem is in the event details. For example, for the sample event cs4=FIREEYE test, when I look at the details of this event I see cs4=FIREEYE only; the rest of the string is truncated. Why?
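A search-time workaround that captures the value up to the next key= instead of stopping at the first space (sketch):

| rex field=_raw "cs4=(?<cs4>.+?)(?=\s+\w+=|$)"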
If I have two queries:

1. index=poc container_name=app horizontalId=orange
   outputs events with the trace ids

2. index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service
   | rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
   | stats p90(latency) as Latency
   outputs Latency = 845

I want to link the output of query 2 to query 1 via the trace ids for the P90 latency.
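One way to tie them together is to use query 1 as a subsearch that restricts query 2 to the matching trace ids, so the P90 is computed only over the linked events. A sketch, assuming traceId is an extracted field in both event sets:

index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service
    [ search index=poc container_name=app horizontalId=orange | dedup traceId | fields traceId ]
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency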