I have this rule and I need it to trigger when the number of results (count of events) is greater than 4, but the "Trigger Condition" did not work. Is there something I can add to the query?
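A minimal sketch of one common workaround, assuming this is a standard scheduled alert: move the threshold into the SPL itself, then set the alert to trigger when the number of results is greater than 0.

    ... your base search ...
    | stats count
    | where count > 4

With this appended, the search returns a row only when more than 4 events were found, so the built-in "Number of Results > 0" trigger condition is sufficient.
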
Hi All, I have a question. What exactly does 'Dispatch_rest_to_indexers' mean? I am getting a warning when running the rest command, and I am on Splunk Cloud: "Restricting results of the 'rest' operator to the local instance because you do not have the 'dispatch_rest_to_indexers' capability." I have seen many blogs talking about this message, but I have not come across a clear explanation of what this parameter actually means. What does dispatch_rest_to_indexers do exactly? Can anyone throw some light on this? Thanks in advance, PNV
I am getting the below error while making an MSSQL connection with DB Connect 3.15.0:

HTTPConnectionPool(host='127.0.0.1', port=9998): Read timed out. (read timeout=310)

Can anyone help me out?
Looking at the Splunk add-on for Cyber Ark, the process appears flawed: the Cyber Ark-supplied ./Syslog/RFC5424Changes.xsl fragment generates a syslog timestamp from the first syslog/audit_record/IsoTimestamp, but the code in forExport/SplunkCIM.xsl then generates multiple CEF-like events on a single line for the (possibly multiple) audit_records, and these hold no timestamps. Thus, if the XSLT iterates over more than one event, not only are the timestamps for the individual events discarded, one can also end up with a single CEF-like event with multiple key-value pairs where the keys are repeated. Basically, it appears that multiple Cyber Ark events are concatenated together into one syslog record without any clear form of event separation, and the timestamps for the second and subsequent events are lost.
Splunk environments with high CPU usage
Hello Splunkers! I am downloading the eStreamer TA for Splunk (Cisco Secure Firewall app), and I am facing the issue below:

/opt/splunk/etc/apps/TA-eStreamer/bin/encore$ openssl pkcs12 -in client.pkcs12 -nocerts -nodes -out "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/10.1.50.10-8302_pkcs.key"
Enter Import Password:
Error outputting keys and certificates
802B49170A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:../crypto/evp/evp_fetch.c:349:Global default library context, Algorithm (RC2-40-CBC : 0), Properties ()

I understand that this error is related to a Python package, but I can see that Python is already installed. Can anyone help me?
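For what it's worth, this error message typically comes from OpenSSL 3.x refusing the legacy RC2-40-CBC cipher used by older PKCS#12 files, rather than from Python. A minimal sketch of one workaround, assuming an OpenSSL 3.x build with the legacy provider available (paths taken from the post):

    # -legacy loads OpenSSL's legacy provider so RC2-40-CBC can be decrypted
    openssl pkcs12 -legacy -in client.pkcs12 -nocerts -nodes \
      -out "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/10.1.50.10-8302_pkcs.key"

Alternatively, the PKCS#12 bundle can be re-exported with modern ciphers on the system that created it.
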
--updated this question to achieve the same behavior on DS Dashboards

Hello, I have a table viz on my dashboards (Simple XML and DS Dashboards) - sample data as given below.

| makeresults format=csv data="cust, sla
Cust1,85
Cust2,96
Cust3,99
Cust4,89
Cust5,100"
| fields cust, sla

How can I colour code the "sla" column based on the given conditions in both Simple XML (without using JavaScript) and DS dashboards?

if (cust IN (Cust1,Cust3,Cust4) AND sla>=90) OR (cust IN (Cust2,Cust5) AND sla>=95) -> Green
if (cust IN (Cust1,Cust3,Cust4) AND sla>=85 AND sla<90) OR (cust IN (Cust2,Cust5) AND sla>=90 AND sla<95) -> Amber
if (cust IN (Cust1,Cust3,Cust4) AND sla<85) OR (cust IN (Cust2,Cust5) AND sla<90) -> Red

Thank you. Regards, Madhav
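A minimal sketch of one approach, assuming the thresholds can be pushed into the SPL itself (the field names threshold_green, threshold_amber and sla_status are invented for illustration): compute a status column in the search and colour that column rather than sla directly.

    | makeresults format=csv data="cust, sla
    Cust1,85
    Cust2,96
    Cust3,99
    Cust4,89
    Cust5,100"
    | fields cust, sla
    | eval threshold_green=if(in(cust, "Cust2", "Cust5"), 95, 90)
    | eval threshold_amber=if(in(cust, "Cust2", "Cust5"), 90, 85)
    | eval sla_status=case(sla>=threshold_green, "Green", sla>=threshold_amber, "Amber", true(), "Red")

In Simple XML the sla_status column could then be coloured with a <format type="color" field="sla_status"> block using a map-style colorPalette, and Dashboard Studio has an equivalent per-column colour option in the table formatting settings, so no JavaScript should be needed.
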
Hi Everyone, We're in the process of updating the SSL certificates on our Splunk servers. However, when attempting the upgrade, we encounter the following error:

"Cannot decrypt private key in "/opt/splunk/etc/apps/*/local/Splunk.key" without a password. Network communication with splunkweb may fail or hang. Consider using an unencrypted private key for Splunkweb's SSL certificate."

Could anyone provide assistance with this issue? Below are the steps we followed while generating the certificate. Please let us know if you spot any mistakes. We're running Splunk 9.0.0.

## Go to /root/certs/
cd /root/certs/
## Create new directory for the certs
mkdir certs_2024
## Create server key
openssl genrsa -des3 -out splunk.key 2048
password123######
password123######
## Create a no-pass key
openssl rsa -in splunk.key -out splunk.nopass.key
enter passphrase - <<<password>>>
## Generate the CSR file
openssl req -new -sha256 -key splunk.nopass.key -out splunk.csr

Once we get the certificate, we run the steps below.

vi end_entity_cert <<paste the end_entity_cert value for the hostname and save>>
vi intermediate_cert <<paste the intermediate_cert value for the hostname and save>>
cp splunk.nopass.key /opt/splunk/etc/apps/App_hostname_ssl/local
Go to certificates folder - cd /home/Splunk/certs_renewal/
Copy the rootCA.pem into /opt/splunk/etc/apps/App_hostname_ssl/local
## Create certificate chain
cat end_entity_cert splunk.key intermediate_cert rootCA.pem >> full.pem
## Verify certificate validity
openssl x509 -enddate -noout -in full.pem
./splunk restart
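One thing that stands out, offered as a hedged observation rather than a definitive fix: the chain is built from splunk.key, which is still passphrase-protected (it was generated with -des3), while the error message itself suggests using an unencrypted key. A minimal sketch of the chain step using the no-pass key instead (file names taken from the post):

    # Build the chain from the unencrypted key so splunkweb can read it without a passphrase
    cat end_entity_cert splunk.nopass.key intermediate_cert rootCA.pem > full.pem
    # Verify certificate validity
    openssl x509 -enddate -noout -in full.pem

Alternatively, if the key must stay encrypted, web.conf's sslPassword setting can supply the passphrase to splunkweb.
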
Hello, I have some issues with parsing events, and a few sample events are given below:

{"eventVer":"2.56", "userId":"A021", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:10:15", "statusCode":"active"}
{"eventVer":"2.56", "userId":"A021", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:09:11", "statusCode":"active"}
{"eventVer":"2.56", "userId":"A021", "accountId":"Adm02", "accessKey":"26dsaa", "time":"2023-12-03T09:09:08", "statusCode":"active"}
{\"eventVer\":\"2.56", "userId":"B001", "accountId":"Test04", "accessKey":"21fsda", "time":"2023-12-03T09:09:04", "statusCode":"active"}
{\"eventVer\":\"2.56", "userId":"B009", "accountId":"Adm01", "accessKey":"21assaa", "time":"2023-12-03T09:09:01", "statusCode":"active"}
{"eventVer":"2.56", "userId":"B023", "accountId":"Adm01", "accessKey":"30tsaa", "time":"2023-12-03T09:08:55", "statusCode":"active"}
{"eventVer":"2.56", "userId":"A025", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:08:51", "statusCode":"active"}
{"eventVer":"2.56", "userId":"C015", "accountId":"Dev01", "accessKey":"41scab", "time":"2023-12-03T09:08:48", "statusCode":"active"}

The event breaking point is the leading {"eventVer":" of each event. I used LINE_BREAKER=([\r\n]*)\{"eventVer":" in my props.conf file, but it is not breaking all events as expected. Any recommendations will be highly appreciated. Thank you.
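A minimal sketch of one possible adjustment, based on the observation that two of the sample events begin with an escaped quote ({\"eventVer\":) which the original pattern cannot match. The stanza name is assumed, and this assumes events arrive newline-separated:

    [my_json_sourcetype]
    # Allow an optional backslash before each quote so both {"eventVer": and {\"eventVer\": break events
    LINE_BREAKER = ([\r\n]+)\{\\?"eventVer\\?":
    SHOULD_LINEMERGE = false
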
Currently, each of my indexes is set to its own specific frozenTimePeriodInSecs, but I am noticing they are not rolling over to cold when the frozenTimePeriodInSecs value is reached. Data Age (keeps growing) vs Frozen Age (stays at whatever is set in frozenTimePeriodInSecs). maxWarmDBCount is set to: maxWarmDBCount = 4294967295. Does this affect it? If the value is changed, would data roll to cold?
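For context, and as a sketch only: warm-to-cold rolling is governed by the warm bucket limits rather than by frozenTimePeriodInSecs, which controls when buckets are frozen (deleted or archived). An illustrative indexes.conf fragment, with the index name and values invented:

    [my_index]
    # Roll warm buckets to cold once more than 300 warm buckets exist
    maxWarmDBCount = 300
    # Also roll to cold if the home path (hot + warm) exceeds this size, in MB
    homePath.maxDataSizeMB = 500000
    # Freeze (delete/archive) buckets whose newest event is older than 90 days
    frozenTimePeriodInSecs = 7776000

With maxWarmDBCount left at 4294967295 and no home path size limit, warm buckets would effectively never roll to cold.
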
I have the following source. I want to extract the time from the source path when data is being ingested.

source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"

In props.conf:

TRANSFORMS-set_time = source_path_time

In transforms.conf:

[set_time_from_file_path]
INGEST_EVAL = | eval _time = strptime(replace(source, ".*/ute-(\\d{4}-\\d{2}-\\d{2}[a-z])/([^/]+/[^/]+).*","\1"),"%y-%m-%d_%H-%M-%S")

I tried testing it but I am unable to get the _time:

| makeresults
| eval source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"
| fields - _time
``` above set test data ```
| eval _time = strptime(replace(source, ".*/compute-(\\d{4}-\\d{2}-\\d{2}[a-z])/([^/]+/[^/]+).*","\1"),"%y-%m-%d_%H-%M-%S")

Thanks in advance
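A hedged sketch of what may be going wrong and one way to adjust it. Several pieces shown above disagree with each other: the props.conf line references source_path_time while the transforms stanza is named set_time_from_file_path; INGEST_EVAL should not begin with "| eval"; the replacement keeps capture group 1 (the dated directory with its letter suffix) rather than the directory holding the timestamp; the format string uses %y (two-digit year) where %Y is needed; and the test search uses compute- instead of ute-. An illustrative corrected version, with the props stanza name assumed:

    # props.conf
    [your_sourcetype]
    TRANSFORMS-set_time = set_time_from_file_path

    # transforms.conf
    [set_time_from_file_path]
    INGEST_EVAL = _time=strptime(replace(source, ".*/ute-\\d{4}-\\d{2}-\\d{2}[a-z]/([^/]+)/.*", "\1"), "%Y-%m-%d_%H-%M-%S")

And a search-bar test of the same expression (single backslashes suffice outside the conf file):

    | makeresults
    | eval source="/logs/gs/ute-2024-02-05a/2024-02-05_16-17-54/abc.log"
    | eval _time=strptime(replace(source, ".*/ute-\d{4}-\d{2}-\d{2}[a-z]/([^/]+)/.*", "\1"), "%Y-%m-%d_%H-%M-%S")
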
This is my Splunk query (screenshot not included). This is the output (screenshot not included). How can I freeze the first column, where the interface names are showing? The problem is that when dragging to the right, the interface name can no longer be seen. Below is the source code (not included).
Is there such a thing as a Splunk AI forwarder that is placed on a device and lets you control the flow of data through biometrics? Or a Smart Forwarder?
We cannot choose the default source type _json while onboarding. We need to extract the JSON data within the log file, which is essential for the app owner. Log format: 2024-01-01T09:50:44+01:00 hostname APP2SAP[354]: {JSON data} I have a Splunk intermediate forwarder reading these log files. The log file has non-JSON data followed by JSON data, which is bread and butter for the application team (log format as shown above). If I forward the data as-is to Splunk, the extraction is not proper, since there is non-JSON data at the beginning. Now I need props and/or transforms to extract it, which I am not sure how to write.
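A minimal search-time sketch, assuming the JSON payload always starts at the first { in the event (the index, sourcetype and the field name json_payload are placeholders): pull the JSON out with rex, then expand it with spath.

    index=your_index sourcetype=your_sourcetype
    | rex field=_raw "^[^\{]+(?<json_payload>\{.*\})$"
    | spath input=json_payload

If index-time handling is preferred instead, a props.conf SEDCMD could strip the syslog prefix ahead of the JSON, at the cost of losing that prefix text from _raw.
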
Using the SOAR export app in Splunk, we are pulling certain alerts into SOAR. Depending on the IP, the artifacts are grouped into a single container. Now I need to create one ticket for each container using a playbook. But what happens is that if the container has multiple artifacts, it creates one ticket for each artifact. Any idea on how to solve this? Phantom / Splunk App for SOAR Export
Hi Folks, I have a quick question. Currently I have a syslog event, and I need the raw data indexed in Splunk with the fields in a different order. Example original syslog: (?<field1>REGEX),(?<field2>REGEX),(?<field3>REGEX), etc. What I want to see indexed in Splunk: (?<field1>REGEX),(?<field3>REGEX),,(?<TIMESTAMP>REGEX),(?<field2>REGEX). I tried with the SED command in props.conf; it is really useful for cleaning the data but not for reordering the info. Thanks in advance, Alex
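For what it's worth, a hedged sketch: SEDCMD is a sed-style substitution, so it can reorder text via capture groups. An illustrative props.conf stanza for a simplified three-field event (the sourcetype name and the field layout are invented):

    [my_syslog_sourcetype]
    # Swap the second and third comma-separated values: a,b,c -> a,c,b
    SEDCMD-reorder_fields = s/^([^,]+),([^,]+),([^,]+)/\1,\3,\2/

This rewrites _raw at index time, so the original field order is not preserved anywhere once indexed.
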
I have a saved "MySearch" that takes a parameter "INPUT_SessionId", something like this: index=foo | ... some stuff | search $INPUT_SessionId$ | ... more stuff And then "MySearch" invoked like... See more...
I have a saved "MySearch" that takes a parameter "INPUT_SessionId", something like this: index=foo | ... some stuff | search $INPUT_SessionId$ | ... more stuff And then "MySearch" invoked like this | savedsearch "MySearch" INPUT_SessionId="abc123" My challenge is that sometimes me & my users accidentally invoke with curly braces around the SessionId (it's a long story), like this: | savedsearch "MySearch" INPUT_SessionId="{abc123}" When invoked this way, the search produces no results, which is confusing for user until they realize they accidentally included curly braces. I'd like to change things inside of "MySearch" so that it strips curly braces from $INPUT_SessionId$ before continuing to use the value. For a typical field value I know how to use trim like | eval someField=trim(someField, "{}") How do I do something like trim() but on the value of the parameter $INPUT_SessionId$ ?
Here is the current data:

Feb 27 14:12:38
node0:
--------------------------------------------------------------------------
Attack database version:3670(Thu Feb 22 14:12:38 2024 UTC)
Detector version :12.2.140230313
Policy template version :3535
node1:
--------------------------------------------------------------------------
Attack database version:3670(Thu Feb 22 14:12:38 2024 UTC)
Detector version :12.2.140230313
Policy template version :3535
{primary:node0}

I need help extracting the values for attack database version (just the digits), detector version, and policy template version, by node (i.e. node0 and node1). The output should look something like this:

Node    Attack database version    Detector version    Policy template version
node0   3670                       12.2.140230313      3535
node1   3670                       12.2.140230313      3535

Please and thank you. I am only able to get node0, but not node1.
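A hedged sketch of one way to pull out all nodes, assuming the whole block above arrives as a single event (field names attack_db, detector and policy_tpl are invented): capture every node section with rex max_match=0, zip the multivalue fields together, and expand to one row per node.

    | rex max_match=0 "(?<node>node\d+):\s*-+\s*Attack database version:(?<attack_db>\d+)\([^\)]*\)\s*Detector version\s*:(?<detector>[\d.]+)\s*Policy template version\s*:(?<policy_tpl>\d+)"
    | eval row=mvzip(mvzip(mvzip(node, attack_db, "|"), detector, "|"), policy_tpl, "|")
    | mvexpand row
    | eval node=mvindex(split(row,"|"),0), attack_db=mvindex(split(row,"|"),1), detector=mvindex(split(row,"|"),2), policy_tpl=mvindex(split(row,"|"),3)
    | table node attack_db detector policy_tpl
    | rename node AS Node, attack_db AS "Attack database version", detector AS "Detector version", policy_tpl AS "Policy template version"
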
Hi all, I have a dashboard that monitors deploys, and a table that tracks all info related to any given deploy. I have a column labeled "pull request urls" that populates with the GitHub link related to the deploy, and I made it clickable with a drilldown. However, for local deploys with no link, it populates "N/A", and I would like to exclude that text from the drilldown. Is there any way to exclude certain strings from drilldowns? The drilldown is copied below.

<option name="drilldown">cell</option>
<drilldown>
     <condition field="PULL_REQUEST_URL">
          <!-- this condition field can be modified based on column header -->
          <link target="_blank">$click.value2|n$</link>
     </condition>
     <condition>
          <!-- keep this blank -->
     </condition>
</drilldown>
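A hedged sketch of one possibility, assuming a Splunk version whose Simple XML drilldown <condition> element accepts a match attribute with an eval-style expression (the |s filter quotes the token so it can be compared as a string). The first condition only fires when the clicked value is not "N/A"; everything else falls through to the empty condition and does nothing.

<option name="drilldown">cell</option>
<drilldown>
     <condition match="$click.value2|s$ != &quot;N/A&quot;">
          <link target="_blank">$click.value2|n$</link>
     </condition>
     <condition>
          <!-- N/A and any other excluded cells do nothing -->
     </condition>
</drilldown>
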
Raise your hand if you’ve ever forgotten your username or password when logging into an account. (We can’t actually see if you’re raising it, but we’re assuming you did.) No more at Splunk! Now, you can use your Splunk Cloud Platform as your identity provider (IdP) to sign in to Splunk Observability Cloud. This means that you don’t need a separate username or password to log into your Observability Cloud account; instead, you can use the credentials from Splunk Cloud Platform to get to your Splunk APM, Infrastructure Monitoring, RUM or Synthetics product for a more seamless login experience. And yes, you can now use Single Sign-On (SSO) to log into Splunk Observability Cloud. Isn’t that better?

With Unified Identity, you get the following:

Faster login experience: You can now use the same credentials to log in to Splunk Cloud and Splunk Observability Cloud via SSO.
Secure and bi-directional data access: By making Splunk Cloud your main identity provider, your Splunk Cloud role-based access control is honored in Splunk Observability Cloud.
Faster mean time to resolve: As Splunk Cloud and Splunk Observability Cloud users, you can access both platforms' data seamlessly based on your needs, for a more unified and flexible experience.

Check out the documentation to see how your Splunk admins can enable this feature for you in just a few minutes. Important note: the AWS region for your Splunk Cloud Platform instance must be the same as your Splunk Observability Cloud instance realm. Have specific questions? Reach out to your representative, who will guide you through the process of setting up Unified Identity for your organization. Keep on Splunking!