All Topics


Hey everyone, I'm stumped trying to put together a query to find specific hosts that return some value but not some other possible values over a given timeframe, where each result is itself a separate log entry (and each device returns multiple results each time it does an operation). E.g., given a list of possible results, the data itself looks something like this:

(results from today:)
hostname=x result=2
hostname=x result=3
hostname=y result=1
hostname=z result=1

(results from yesterday/previous days:)
hostname=x result=1
hostname=y result=1
hostname=z result=1

I need to find all hostnames that had a result of "1" but not results "2" or "3" over some given timeframe. So, from the data above, I'd be looking to return hostnames "y" and "z", but not "x". Unfortunately, the timeframe would be weeks, and the search would be looking at many thousands of possible hostnames. The only data point I'd know ahead of time is the list of possible results (it would only be a handful of possibilities, but a device can potentially return some or all of them at once). Any advice on where to start? Thanks!
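A minimal SPL sketch of one way to approach this (the index, sourcetype, and time range are placeholders, and the results are assumed to arrive in a field named result):

    index=main sourcetype=device_results earliest=-3w
    | stats values(result) as results by hostname
    | where isnotnull(mvfind(results, "^1$")) AND isnull(mvfind(results, "^(2|3)$"))

stats values() collapses each hostname to a single row no matter how many events it produced, which keeps the search workable across thousands of hosts; the where clause then keeps only hosts whose set of results contains "1" and none of the excluded values.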
Hi, I am using the following query:

`mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
| eval resp_time_exceeded = if(EXETIME>3000, "1","0")
| bin span=30m _time bins=2
| stats count as "total_requests", sum(resp_time_exceeded) as long_calls by kubernetes.namespace.name, kubernetes.container.name
| eval Percent_Exceeded = (long_calls/total_requests)*100
| where total_requests>200 and Percent_Exceeded>5

I am getting results, but even though I use "| bin span=30m _time bins=2" in the query above, the data is not shown in 30-minute increments. How can I refine the query so that it shows 30-minute increments instead of everything at once?
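A sketch of the likely fix, keeping the macro and filters as they are: stats discards _time unless it appears in the by clause, so the 30-minute bucketing is lost. Grouping by _time (and dropping the bins=2 cap, which isn't needed here) should give one row per 30-minute bucket:

    `mbp_ocp4` kubernetes.container.name=*service* level=NG_SERVICE_PERFORMANCE SERVICE!=DPTDRetrieveArrangementDetail*
    | eval resp_time_exceeded = if(EXETIME>3000, "1", "0")
    | bin _time span=30m
    | stats count as total_requests, sum(resp_time_exceeded) as long_calls by _time, kubernetes.namespace.name, kubernetes.container.name
    | eval Percent_Exceeded = (long_calls/total_requests)*100
    | where total_requests>200 AND Percent_Exceeded>5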
My organization has a handful of heavy forwarders that were configured to listen to syslog sources through udp://514. This was set up by a 3rd party, and now we are trying to understand the configuration. Searching the heavy forwarders' /etc/* recursively for "514", "tcp", "udp", "syslog", or "SC4S" returns no relevant results. We know syslog is working, because we have multiple sources that are pointed at the heavy forwarders using udp over port 514 and their data is being indexed. Curiously, when a new syslog source is pointed at the HFs, a new index with a random name pops up in our LastChanceIndex. We have no idea how any of this is configured - the index selection, or the syslog listener. We usually create an index that matches the name given, since we've never been able to find the config to set it manually. Any suggestions on how syslog might be set up, or what else I could try searching for?
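A few hedged places to look, assuming a default install path. btool shows the configuration splunkd has actually merged (including app contexts you may not have grepped), and ss reveals whether something outside Splunk (rsyslog, syslog-ng, or SC4S in a container) owns the port:

    # every merged inputs.conf stanza, with the file each setting comes from
    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -iE 'udp|514'

    # is splunkd or another process listening on 514?
    sudo ss -lnup | grep ':514'

    # index-routing overrides that could explain the index names you see
    $SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -iE '_MetaData:Index|DEST_KEY'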
Hi, I'm trying to run the sample code below to send a message to Splunk, but I'm getting an error: "Host not found". Am I doing this right? I'm able to ping the Splunk server (171.134.154.114) from my dev Linux server, and I can use a curl command successfully and see my message in the Splunk dashboard.

doc/html/boost_asio/example/http/client/sync_client.cpp - 1.47.0

./sync_client 171.134.154.114 /services/collector
arg[1]:171.134.154.114
Exception: resolve: Host not found (authoritative) [asio.netdb:1]
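For reference, a working HEC call usually looks like the sketch below (assuming HEC is enabled on its default port 8088; the token is a placeholder). Note that it supplies a scheme, a non-default port, the /services/collector/event path, and an Authorization header - things the Boost sync_client example, which resolves the host against the plain "http" service (port 80), does not provide on its own:

    curl -k "https://171.134.154.114:8088/services/collector/event" \
      -H "Authorization: Splunk <your-hec-token>" \
      -d '{"event": "hello from curl", "sourcetype": "manual"}'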
I am having trouble creating a proper drilldown action between two custom ITSI entity dashboards. They both work fine when called by clicking the entity name in the Service Analyzer. The two entity dashboards show data from two custom entity types with some relation to each other, and I want to create a navigation between the two dashboards. I created a normal drilldown action to call the related dashboard. This works somehow, but the token is not handled correctly: for example, I defined the token parameter host = $click.value2$, and in the target dashboard I see |search host=$click.value2$ instead of the real value that should have been handed over in the token. When I use the dashboards outside of ITSI, the drilldown action works fine. It looks to me like ITSI uses some scripts and the handover is not made directly to the other entity dashboard, but somehow goes through the entity (_key) and the defined entity type. Great if somebody could shed some insight on that!
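As a hedged workaround sketch (the app, dashboard, and token names are placeholders): passing the clicked value through the URL query string in Simple XML, rather than relying on the token handoff, sidesteps whatever rewriting happens in between:

    <drilldown>
      <link target="_blank">/app/itsi/my_other_entity_dashboard?form.host=$click.value2|u$</link>
    </drilldown>

The |u filter URL-encodes the clicked value so hostnames with special characters survive the handover.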
Hello, I have a Splunk distributed deployment (about 20 servers + about 100 UFs). On the servers, I configured SSL encryption of management traffic and TLS certificate host name validation in server.conf:

[sslConfig]
enableSplunkdSSL = true
serverCert = <path_to_the_server_certificate>
sslVerifyServerCert = true
sslVerifyServerName = true
sslRootCAPath = <path_to_the_CA_certificate>

Everything is working well - the servers communicate with each other. But my question is: I use the deployment server for pushing config to UFs, and I am a little surprised that management traffic between the UFs and the deployment server is still flowing (I see all UFs phoning home, and I can push config) even though I did not configure encryption or hostname validation on any UF. Is that OK? Does it mean that hostname validation for management traffic cannot be configured on a UF? Or is there a way to configure hostname validation on UFs? I only found how to configure hostname validation on a UF in outputs.conf for sending collected data to the indexers, but nothing about management traffic. Thank you for any hint. Best regards, Lukas Mecir
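A hedged sketch of what the UF side could look like. The UF is the TLS client when it phones home, so the verification settings live in its own server.conf; this assumes a 9.x forwarder and that the deployment server's certificate contains a name matching targetUri:

    # server.conf on the universal forwarder
    [sslConfig]
    sslRootCAPath = <path_to_the_CA_certificate_on_the_UF>
    sslVerifyServerCert = true
    sslVerifyServerName = true

    # deploymentclient.conf - the name here is what gets validated against the DS certificate
    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089

By default a UF does not verify the deployment server's certificate at all, which would explain why phone-home kept working with no UF-side changes.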
I have a query that returns 2 values:

. . . | stats max(gb) as GB by metric_name

metric_name       GB
storage_current   99
storage_limit     100

Now I want to be able to reference the current and limit values in a radial gauge. How can I convert that table into key-value pairs so I can say that the value of the radial is "storage_current"? Something like |eval {metric_name}={GB}
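A minimal sketch of one way to do this with transpose, which turns the metric_name values into field names on a single row:

    . . . | stats max(gb) as GB by metric_name
    | transpose header_field=metric_name
    | fields - column

After this, storage_current and storage_limit are ordinary fields on one result row, so the radial gauge can use storage_current as its value (for example via the gauge command, or the panel's range options, with storage_limit as the upper bound).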
When setting up multi-environment adrum injection in an Angular application, it is best to follow a few patterns to keep things clean and easy to debug. This approach reduces unnecessary complexity and makes any issues easier to track down. The brief steps below show how to create adrum scripts for different environments so you can manage your Angular applications effectively across them.

In this article...
How do I configure adrum scripts for my Angular environment?
1. Create index files for each environment-specific injection script
2. Environment-specific adrum scripts for dynamic injection at build time
3. Apply configurations in angular.json in anticipation of the environment build

How do I configure adrum scripts for my Angular environment?
Create the adrum scripts, tailored for their respective environments, within the /assets/scripts directory. This lets Angular inject these scripts locally as separate files. The script is quite simple to begin with, and you can add to it as app complexity and tracking needs grow. The screenshot below shows an example of the AppDConfig.js file that is used for each environment that needs tracking with different logic or a different appKey (a rough sketch of such a script appears at the end of this article).
(Screenshot: example injection script)

1. Create index files for each environment-specific injection script
Next, create an index.html file for each environment that has its own injection script, as seen in the screenshot below. There are three index files: one default index.html with no special logic and no adrum script, used unless we tell angular.json to swap the index file for a different one. This default is typically used for local development, so local development is not reported to the AppDynamics Controller. There are also index.prod.html and index.uat.html (you can have as many as you need for specific logic in the respective environments).
(Screenshot: the three index files corresponding to the environment-specific injection scripts)

2. Environment-specific adrum scripts for dynamic injection at build time
In the same screenshot, there are three scripts - AppDConfig.js, AppDConfig.prod.js, and AppDConfig.uat.js - that are imported into the respective index files to dynamically inject the correct script into the correct environment at build time.
In the screenshot below, the index.prod.html file uses the AppDConfig.prod.js script. This index file is swapped in when the production builder is run:
ng build -c production
(Screenshot: index.prod.html, which uses the AppDConfig.prod.js script)
Likewise, index.uat.html pulls in the AppDConfig.uat.js file when UAT is built:
ng build -c uat
(Screenshot: index.uat.html, which pulls in the AppDConfig.uat.js script)

3. Apply configurations in angular.json in anticipation of the environment build
Next, apply configurations in the angular.json file so that when the respective environments are built, the default index file is swapped out and replaced with the environment-specific index.html file.
(Screenshot: angular.json index.html replacement configuration example)
You can see that, in the production configuration, we have added:

"index": {
  "input": "src/index.prod.html",
  "output": "index.html"
},

And likewise, in the UAT configuration we have added:

"index": {
  "input": "src/index.uat.html",
  "output": "index.html"
},

This tells Angular to swap these out at build time only for these environments/configurations. All other configurations, without this special setup, simply use the default index.html file, which has no AppDynamics injection script.
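The screenshots of AppDConfig.js referenced above are not reproduced here. As a rough sketch of what an environment-specific file typically contains (not taken from the original article - the appKey and beacon URLs are placeholders you would replace per environment):

    // AppDConfig.prod.js - hypothetical example
    window['adrum-start-time'] = new Date().getTime();
    (function (config) {
      config.appKey = 'AD-AAA-BBB-CCC';                        // per-environment EUM app key (placeholder)
      config.adrumExtUrlHttp = 'http://cdn.appdynamics.com';
      config.adrumExtUrlHttps = 'https://cdn.appdynamics.com';
      config.beaconUrlHttp = 'http://col.eum-appdynamics.com';   // placeholder collector
      config.beaconUrlHttps = 'https://col.eum-appdynamics.com'; // placeholder collector
      config.xd = { enable: false };
    })(window['adrum-config'] || (window['adrum-config'] = {}));

The corresponding index file then loads this script followed by the adrum.js agent script from the AppDynamics CDN, so only the environment-specific configuration differs between builds.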
Hi fellow Splunkers, a more general question: what are the best practices for upgrades, security patching, and deployment in a distributed production environment? We have SHs and indexers clustered. I can clarify if this is unclear. Appreciate any advice and shared experiences.
Hello everyone, we are trying to restore DDSS data stored in an S3 bucket to our Splunk Enterprise instance. We followed the steps here: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Admin/DataSelfStorage#Restore_indexed_data_from_an_AWS_S3_bucket but we are facing the error shown below. Any thoughts on what might be the root cause? We did upload the data to the instructed directory, but we keep facing this error when rebuilding.
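For context, a rough sketch of the documented restore flow (paths, bucket name, and index name are placeholders) - the error usually surfaces at the rebuild step, so it is worth confirming the directory you pass to rebuild is the one that actually contains the rawdata journal:

    # copy the bucket downloaded from the DDSS S3 bucket into the target index's thawed path
    cp -r db_<endtime>_<starttime>_<bucketid> $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/

    # rebuild the bucket's index files from the rawdata journal
    $SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_<endtime>_<starttime>_<bucketid>

    # restart so the thawed bucket becomes searchable
    $SPLUNK_HOME/bin/splunk restart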
How can I calculate the CPU usage of the Splunk server as a percentage from the data in the _internal index? The data in the _internal index with source=/opt/splunk/var/log/splunk/metrics.log looks like this:

01-25-2024 15:47:42.528 +0000 INFO Metrics - group=pipeline, name=dev-null, processor=nullqueue, cpu_seconds=0.001, executes=4445, cumulative_hits=9717713
01-25-2024 15:47:42.527 +0000 INFO Metrics - group=workload_management, name=workload-statistics, workload_pool=standard_perf, mem_limit_in_bytes=71715885056, cpu_shares=358
01-25-2024 15:47:42.525 +0000 INFO Metrics - group=conf, action=acquire_mutex, count=20, wallclock_ms_total=0, wallclock_ms_max=0, cpu_total=0.000, cpu_max=0.000
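One caveat: cpu_seconds in metrics.log is per-pipeline processor CPU time, not a host-level percentage. If the _introspection index is enabled (it is by default on full Splunk Enterprise instances), the resource-usage data there already carries percentages; a hedged sketch, with the host name as a placeholder:

    index=_introspection sourcetype=splunk_resource_usage component=Hostwide host=<your_splunk_server>
    | eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
    | timechart span=5m avg(cpu_pct) as avg_cpu_pct by host

If you must stay with metrics.log, summing cpu_seconds per 30-second metrics interval and dividing by (interval x number of cores) gives an approximate percentage for splunkd itself.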
I have this vulnerability on all our instances on the latest version of splunkforwarder:

"The version of OpenSSL installed on the remote host is prior to 1.0.2zf. It is, therefore, affected by a vulnerability as referenced in the 1.0.2zf advisory, identified in CVE-2022-1292, the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0, 3.0.1, 3.0.2, 3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068)"

Any recommendations here?
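The universal forwarder bundles its own copy of OpenSSL, so the usual remediation is to upgrade the forwarder to a release whose bundled OpenSSL is patched, rather than patching the OS library. Assuming a default install path, you can check what a given forwarder actually ships with:

    $SPLUNK_HOME/bin/splunk cmd openssl version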
Hi all, we are moving to Splunk Cloud and want to keep the LDAP searches in the cloud as well. Today we have the app installed on a search head and the commands work. I know how to forward the data to Splunk Cloud from a HF, but what about the ldap commands, like ldapgroup etc.? Do we need to install the app in Cloud as well to get the commands to work? //Jan
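Since ldapsearch, ldapfilter, ldapgroup and friends are custom search commands, they only exist on the search tier where the app (SA-ldapsearch) is installed - so the app would also need to be installed, and its domain configured, on the Cloud search heads for something like the hedged example below to run there (the domain name and filter are placeholders):

    | ldapsearch domain=default search="(&(objectClass=group)(cn=Domain Admins))" attrs="cn,member"

The Cloud search heads must also be able to reach your LDAP servers over the network for the commands to return results.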
I have an enterprise network and we have a Splunk Enterprise license. Question: while troubleshooting a sourcetype or host, the dashboard needs to show the past history of the particular user or source being checked - for example, how many alerts the same user has triggered and the details of those alerts - and clicking a link should show the past troubleshooting history.
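A hedged starting point for the "past alert history" panel, assuming default audit logging is in place (ss_name is the saved search/alert name; the user value is a placeholder you would wire to a dashboard token):

    index=_audit action=alert_fired user=<username>
    | table _time ss_name severity sid
    | sort - _time

A drilldown on that panel can then pass the user or alert name as a token into a detail dashboard showing the earlier troubleshooting context.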
"I need help with this XML for a dashboard; essentially, I need to call a token that modifies data within a report, having already created the token with the name 'data.' How can I do this?"   <for... See more...
"I need help with this XML for a dashboard; essentially, I need to call a token that modifies data within a report, having already created the token with the name 'data.' How can I do this?"   <form version="1.1">   <label>Lista IP da bloccare</label>   <fieldset submitButton="true" autoRun="false">     <input type="time" token="data">       <label></label>       <default>         <earliest>rt-24h</earliest>         <latest>rt</latest>       </default>     </input>   </fieldset>   <row>     <panel>       <table>         <search ref="checkpoint1"></search>         <option name="drilldown">none</option>       </table>     </panel>   </row> </form>
Hello, I'm looking for your insights on pinpointing changes in fields over time. Events are structured with a timestamp, an ID, and various fields. I'm seeking advice on constructing a dynamic timeline that identifies altered values and the corresponding fields. Example events below:

10:20:30 25/Jan/2024 id=1 a=1534 b=253 c=384 ...
10:20:56 25/Jan/2024 id=1 a=1534 b=253 c=385 ...
10:20:56 25/Jan/2024 id=2 a="something" b=253 c=385 ...
10:21:35 25/Jan/2024 id=2 a="something" b=253 c=385 ...
10:22:56 25/Jan/2024 id=2 a="xyz" b="-" c=385 ...

Desired result format:

10:20:56 25/Jan/2024 id=1 changed field "c"
10:22:56 25/Jan/2024 id=2 changed field "a", changed field "b"

My pseudo SPL to find changed events:

... | streamstats reset_on_change=true dc(*) AS * by id | foreach * [ ??? ]

With hundreds of fields per event, I'm seeking an efficient method - considering a combination of streamstats, foreach, transaction or stats. Insights appreciated.
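A hedged sketch along the streamstats + foreach line (the field names a b c stand in for your real field list; with hundreds of fields you would generate that list, or use foreach * and exclude the prev_* copies):

    ... | sort 0 id _time
    | streamstats current=f window=1 last(*) as prev_* by id
    | foreach a b c
        [ eval changed = if(isnotnull('prev_<<FIELD>>') AND '<<FIELD>>' != 'prev_<<FIELD>>', mvappend(changed, "<<FIELD>>"), changed) ]
    | where isnotnull(changed)
    | table _time id changed

streamstats copies each id's previous event values into prev_* fields, and the foreach template compares every field with its previous copy, collecting the names of the ones that differ into a multivalue changed field.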
Hi all, I have created an alert that looks for instances without proper tags. The search in the alert returns the instance name and instance owner. At the scheduled time, an email notification is sent to all owners with the CSV file attached. I am using action.email.to=$result.email_address$ (a dynamic email address returned from the search). With this, the email notification is sent successfully to all users in $result.email_address$, but it is sent to each of them separately. I want all of the users to be in the To field so that one email is sent. Please let me know how we can achieve this. Regards, PNV
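A minimal sketch of one way, assuming the search currently ends with one row per owner: collapse all addresses into a single comma-separated field and point the alert action at that instead, so the result set carries one value for the To field:

    ... existing search ...
    | stats values(instance_name) as instances, values(email_address) as email_list
    | eval email_list = mvjoin(email_list, ",")

with action.email.to = $result.email_list$. Note this also collapses the results to a single row, so the attached CSV changes shape; if the per-instance rows must be preserved, use eventstats instead of stats so email_list is added alongside the original rows.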
Hello everyone, I need to onboard a huge amount of logs, of which 90% is unnecessary. My goal is to ingest only events containing certain keywords like "Login Failed", "User Login", etc. I have seen other articles explaining how to filter events by exclusion using nullQueue, but that doesn't fit my case because I only know which events I want to ingest, based on particular keywords. I am looking for a hint on how I can proceed, if it's possible. Thank you all.
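This is the documented "keep only specific events" variant of the nullQueue pattern: send everything to nullQueue first, then let a second transform route the matching events back to the index queue (order matters - setnull must come before setparsing). The sourcetype name and regex below are placeholders:

props.conf

    [my_sourcetype]
    TRANSFORMS-filter = setnull, setparsing

transforms.conf

    [setnull]
    REGEX = .
    DEST_KEY = queue
    FORMAT = nullQueue

    [setparsing]
    REGEX = Login Failed|User Login
    DEST_KEY = queue
    FORMAT = indexQueue

This has to live on the first full Splunk instance that parses the data (heavy forwarder or indexer); a universal forwarder alone will not apply it.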
As mentioned in the subject, help me with the keyboard shortcut to format HTML and XML code in the dashboard source code editor. For example, I want the code below to be formatted as shown in the "To" section:

<dashboard version="1.1"> <label>Test Dashboard</label> <row> <panel> <html> <h1> <b>Some bold text</b> </h1> </html> </panel> </row> </dashboard>

To:

<dashboard version="1.1">
  <label>Test Dashboard</label>
  <row>
    <panel>
      <html>
        <h1>
          <b>Some bold text</b>
        </h1>
      </html>
    </panel>
  </row>
</dashboard>

I tried using Ctrl+Shift+F, but it only formats the XML code in the dashboard source code editor; the HTML code is not getting formatted and remains as is.
I want to know which saved search is generating a particular lookup. How do I do that?
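A hedged sketch using the REST endpoint for saved searches - filter for searches whose SPL writes the lookup with outputlookup (the lookup name is a placeholder):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search search="*outputlookup*" search="*my_lookup_name*"
    | table title eai:acl.app author search

If the lookup is filled by something other than a scheduled search (a lookup editor app, a manual upload, or a script), it won't show up here.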