All Topics


Hello, I have a heavy forwarder installed on a server to monitor a certain log file. We used to read that log just fine, but after a bug fix to log generation (on the server side) and a server restart, I can't read that log file at all. Our inputs.conf was:

[monitor:///data/ESB/ACH/LOG/]
disabled = 0
sourcetype = napas.itso.app.achlog
index = napas.ach.app.log

It can still read all the other log files in there, but not the one that we need. I have restarted the agent, restarted the server, and restarted the splunkd connection, but it still can't read the one we need. We can read /data/ESB/ACH/LOG/iib_log_summary_2022-07-12.log but can't read /data/ESB/ACH/LOG/iib_log_detail_2022-07-12.log. We checked the read permissions on both files and they're the same. How can I troubleshoot this?
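A common cause when one file in a monitored directory is skipped while its siblings are read: the forwarder's fishbucket compares a CRC of the first bytes of each file against files already indexed, so a detail log that begins with the same header bytes as an already-read file can be treated as a duplicate. A hedged sketch of a more specific stanza to test (the wildcard path and the use of crcSalt are suggestions, not a confirmed fix for your case):

```
[monitor:///data/ESB/ACH/LOG/iib_log_detail_*.log]
disabled = 0
sourcetype = napas.itso.app.achlog
index = napas.ach.app.log
crcSalt = <SOURCE>
```

You can also run `$SPLUNK_HOME/bin/splunk list inputstatus` on the forwarder to see how Splunk classifies each file, and grep splunkd.log for the file's path to see why it is being skipped.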
Hi, when I run a search against an index in smart/verbose mode, I get the following error with zero results: "Some events were removed by Timeliner because they were missing _time". However, when the same query is run in fast mode, I see results. Is there something wrong with the timestamps of the incoming logs? How should I fix this?
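This usually means timestamp extraction failed, so the events lack a usable _time; fast mode skips the timeline, which is why the problem only surfaces in smart/verbose mode. One way to check is `| eval t=strftime(_time, "%F %T") | table t _raw`. If ingestion really is not parsing the timestamps, a hedged props.conf sketch for the indexer or heavy forwarder (the sourcetype name and format strings below are placeholders for your data, not known values):

```
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Already-indexed events keep their bad _time; a props change only affects data ingested after the fix.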
Hi, recently my team set up our ITSI environment and created a few service templates and KPI base searches. When I create a new service and link it to a service template, I tell it to backfill the KPIs. Everything up to this point looks great: the KPIs and the service analysis page display everything in the backfill as healthy. But nothing under the KPIs displays beyond the time I created the service, and eventually everything returns to N/A. I've discovered that the KPI search acceleration using the summary index does not seem to be working: when I run the raw search, everything displays. How do I get the itsi_summary index populated with the correct information?
Hi, I have an event like:

vuln {
    host: some_host
    cve: {
        base_score: 10
        description: "Really nasty"
        references: [link_1, link_2]
    }
    remediation: {
        something: "something"
    }
}

I don't want a table where each JSON key is a column name; I want the event rendered as rows in a table. I would like a dynamic table, something like:

Name        | Value
base_score  | 10
description | Really nasty
references  | link_1, link_2
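One possible approach, assuming the JSON is in _raw: extract the cve fields with spath, flatten the multivalue references, and use transpose to flip field names into a Name column (the field paths below are taken from the example event):

```
| spath path=vuln.cve.base_score output=base_score
| spath path=vuln.cve.description output=description
| spath path=vuln.cve.references{} output=references
| eval references=mvjoin(references, ", ")
| table base_score description references
| transpose 0 column_name=Name
| rename "row 1" AS Value
```

Note that transpose flips the whole result set, so with multiple events you will get one value column per event (row 1, row 2, ...); limit to one event or adapt accordingly.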
Hello, I have a table on my Splunk dashboard. I have a search to get the data, and I am using <fields> to filter the columns that I want to see on my dashboard. I observed that even if I filter out a couple of columns from the display, I am still able to use the other columns' data (column c or d in the example below) for other purposes. Below is the format that I am using for the Splunk table:

<search>
  <query>| dbxquery connection=connection query="SELECT a, b, c, d FROM t1"</query>
</search>
<fields>["a", "b"]</fields>

Now, I have a requirement to create a custom Splunk table using JavaScript. I was able to do that, but I am not sure how to get the <fields> behavior provided by Splunk in JavaScript. The problem I observed with the JavaScript table is that if I am not displaying a particular column, I am not able to use that column's data for other operations. I basically want to display just the a and b columns from my JavaScript query, but at the same time utilize the c and d column data for other operations. Kindly suggest a solution.
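In SimpleXML, <fields> only affects rendering; the underlying results still contain every column. You can do the same thing in JavaScript by keeping the full result set in memory and projecting only the visible columns into the table you render. A minimal sketch in plain JavaScript, assuming results arrive in the common {fields, rows} shape that SplunkJS result models expose as "json_rows" (the function name is ours, not a Splunk API):

```javascript
// Project a subset of columns for display while keeping the full
// result set (including hidden columns) available for other logic.
function projectColumns(results, visible) {
  // Map each visible field name to its column index in the full results.
  const idx = visible.map(name => results.fields.indexOf(name));
  return {
    fields: visible.slice(),
    rows: results.rows.map(row => idx.map(i => row[i])),
  };
}

// Example: full results keep c and d; the table only renders a and b.
const results = {
  fields: ["a", "b", "c", "d"],
  rows: [[1, 2, 3, 4], [5, 6, 7, 8]],
};
const display = projectColumns(results, ["a", "b"]);
// Render `display` in your custom table; read c/d from `results` as needed.
```

The key design point is to treat column hiding as a view concern: the search/DB query still returns a, b, c, d, and only the rendering step narrows to a and b.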
Hey Splunkers! We have an exciting update lined up for you this month focusing on two major new tools that have just been released on Splunk Lantern - the Use Case Explorer for Security, and the Use Case Explorer for Observability.

Why use the Use Case Explorers?

When you were new to Splunk, you likely started using our platform with one, two, or a handful of use cases you wanted to achieve. You'll have worked to get those use cases activated and to see value from your Splunk investment. But your needs will change over time as your function or organization grows and matures. The use cases you bought Splunk for initially might change or become less important, and you might find you need some help identifying where to focus next. Or you might find that the ways you accomplished your original use cases aren't as effective as they were at the start, and you need to adopt more efficient processes that fit you better as you scale.

The Use Case Explorers for Security and Observability are designed to help in both of these scenarios - giving you a 'color-by-numbers' guide on how to grow and improve your Splunk usage throughout your journey to build a mature Security or Observability function. The Use Case Explorers are the result of months of hard work by expert Splunkers with decades of industry experience, who know first-hand what success in Security and Observability looks like. We've mapped out a journey based on how we know top customers use Splunk to grow, while also drawing on guidance from industry analysts such as Gartner and best-practice tools like the MITRE ATT&CK framework, to help you see how Splunk can partner with you throughout your journey.

So are you ready to take a look? Let's explore how these tools work.

How to use the Use Case Explorers

Each of the Use Case Explorers uses a map to provide a framework for your Security or Observability journey. You'll be able to use the map to identify where you are currently, as well as where you want to go. Different use cases are recommended at different stages of the map, and just like any other Lantern article, we give you the exact procedures you need to follow to get them implemented. We've developed SPL snippets, videos, and step-by-steps that make it easy for you to get your use cases up and running quickly and efficiently.

Identifying new use cases is one thing, but implementing them in your organization can be a larger task. That's why we've also created the Value Realization Cycle, a procedure you can follow to see continued success with new use cases, and the Use Case Registry, a tracking template you can use to help you roll out your new use cases. Select "Click here if it's your first time to learn how to use it" from the Use Case Explorers for Security or Observability to access them.

Finally, if you want to see how everything looks in action, you can follow along with the fictional organization CS Corp to see how they use the Use Case Explorer to implement new use cases and grow their business. Click through to see the examples for Security or Observability.

The Use Case Explorer for Security

Here's the map for the Use Case Explorer for Security. Across the top are workflow stages (Ingest Data, Monitor, Analyze and Investigate, and Act), and below each are focal areas that contain use cases and best-practice guidance which you can start to apply right away.

The Use Case Explorer for Observability

The Use Case Explorer for Observability aligns to Gartner's industry-defined AIOps framework, which helps define your journey. The workflow stages are Observe, Engage, and Act, each containing focal areas with use cases and best-practice guidance.

What's next?

Click into our Use Case Explorers and let us know what you think! If you're logged into Lantern with your Splunk account, you can leave feedback at the bottom of each page. We're planning to continue building out the Use Case Explorers over the coming weeks, so keep checking back for more use cases and content.

Finally, thinking about Lantern as a whole, we're looking to get your ideas on the type of content you'd like to see on Lantern in the future. Click through to one or more of the following anonymous surveys to tell us what you want to see more content on:

Share your Security ideas
Share your Observability ideas
Share your product ideas

We hope you've found this update helpful. Thanks for reading!

- Kaye Chapman, Customer Journey Content Curator
I am trying to include a dynamic name for a notable event that I have triggering. When I try to use $variable$, it just shows the literal token and does not pull the field value.

My search:
index=o365 sourcetype="mscs:azure:eventhub" body.operationName="User Risk Detection" "body.properties.riskLevel"=high
| rename body.properties.userDisplayName AS "Display Name"

My name: Office 365 Risky User Detected - $Display Name$

Can anyone help?
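Token substitution in notable event titles is generally only reliable for field names without spaces (and in some contexts you need $result.fieldname$ rather than $fieldname$). A hedged tweak worth trying: rename to an underscore name and reference that in the title.

```
index=o365 sourcetype="mscs:azure:eventhub" body.operationName="User Risk Detection" "body.properties.riskLevel"=high
| rename body.properties.userDisplayName AS display_name
```

Then set the title to: Office 365 Risky User Detected - $display_name$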
splunk-winevtlog.exe crash, low throughput, high CPU utilization, and EventCode filtering not working as expected with 8.1.x/8.2.x/9.0
[Blog Post] Differentiate various size-related parameters from indexes.conf:
maxTotalDataSizeMB
maxGlobalRawDataSizeMB
maxGlobalDataSizeMB
maxDataSize
maxVolumeDataSizeMB
Hi all, is there a way to query the API to return only the custom metrics that have been set up in an application/tier/node? Use case: we would like to know how many custom metrics have been created per application, and understand whether each metric is actually useful and being used. Having hundreds of thousands of custom metrics sitting on the controller is not good practice, I would imagine. We are using a SaaS controller. Thanks to anybody with an answer or suggestion.
I am trying to enable server certificate hostname validation in server.conf. I literally cut and pasted the line

sslVerifyServerName = true # turns on TLS certificate host name validation

from the Splunk documentation, and when I restart Splunk on this on-prem deployment server it says:

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Now, I get that the CLI setting is cliVerifyServerName instead of sslVerifyServerName, but I even tried having both lines there and it still does not like it. I have issued an Enterprise web certificate to this server, and it is still valid for two years, so I am at a total loss here. Please help.
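One thing worth checking: Splunk .conf files do not support inline comments, so pasting the setting together with "# turns on TLS certificate host name validation" can make the trailing comment part of the value, and "true # ..." is not a valid boolean. The warning also refers specifically to cliVerifyServerName. A hedged sketch of what the stanza might look like (verify the attribute names against the server.conf spec for your version, and make sure the settings sit under [sslConfig]):

```
[sslConfig]
sslVerifyServerName = true
cliVerifyServerName = true
```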
Hello, I have several events in the _raw field that contain a unique identification number. I would like to replace these with something standard so I can aggregate counts.

Example data:
fi/transaction/card/purchase/tx_2994882028948/refund
fi/transaction/card/purchase/tx_3920496893002/void
fi/transaction/card/purchase/tx_2930540482198/refund

I'd like these all to read:
fi/transaction/card/purchase/trans/refund
fi/transaction/card/purchase/trans/void
fi/transaction/card/purchase/trans/refund

So, replace the unique identifier but keep the verbiage at the end. I've tried a few of the methods noted in other threads, but to no avail; some don't work at all, and some run but don't replace the values. Thanks!
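One way, assuming the paths are in _raw and the identifier always looks like tx_ followed by digits: a sed-style replace with rex, which leaves the trailing segment intact.

```
| rex mode=sed field=_raw "s/tx_\d+/trans/g"
```

After this, something like `| stats count BY _raw` aggregates the normalized values.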
Hello community, I am looking at TAs/apps and trying to figure out what to use, where to use it, and how to use it optimally. Are there best practices for this?

To set the stage, let's take an example: collecting only some logs from hosts using a universal forwarder. Using Splunk_TA_nix and setting up a local/inputs.conf, cherry-picking a few sources/folders from default/inputs.conf, seems reasonable. But pushing the entire app to a UF seems a bit overkill; lots of it would go unused in a setup like this, and not all scripts will be needed.

Then, on the search heads, field extraction should be performed for these sources. I assume you need the app installed there as well for search-time extraction. However, things like /bin, inputs.conf, outputs.conf, etc. seem unnecessary. Generally it seems like keeping excerpts from props.conf and transforms.conf could suffice.

To formulate a question that can be answered: say I have a few logs/sources being indexed and searchable. On the search heads, assuming I am only interested in field extraction for the affected sourcetypes (no dashboards, reports, or log collection on the SH), should I install the entire application/TA regardless of how much or little of it will actually be used? Or can I remove the parts that serve no purpose for the function outlined? If so, what is the most efficient way to sort out what to keep and what to discard (for example, is it always enough to keep props.conf/transforms.conf)?

Looking forward to your feedback. All the best
Hello, some users in my system do not have the Data Summary button (each one has a different role). How can I enable the Data Summary for them? Thanks
I'm looking for a way to collect all custom lists. While I can do so individually for every Custom List with `phantom.get_list()`, I still need their names to make use of this function. So, is there a way to get all Custom Lists' names, or the Custom Lists' contents? As a workaround I tried making a request to "/rest/decided_list", but it doesn't return everything that is accessible through Phantom itself.
Hi all, I want to create an alert for hosts file modification. I found the built-in one here on the forums, but I would like to add a filter that can read inside the file, so that when it's being modified by Docker, the alert ignores the change and won't activate. Appreciate the assistance!
Hi, I am a young man of 15, passionate about cybersecurity, hacking... I don't have much experience other than some books I've read. I heard about your training for young people aged 12 to 16 in UTC for secondary school. I would like to know more about this. Thank you in advance for your answer, Bilal
I uploaded a new version of my app to Splunkbase. It passed validation, and I was able to update successfully from my Splunk Enterprise test instance. However, my Splunk Cloud instance is not detecting the new version and offering to update. It did pull the new version when I uninstalled and reinstalled the app; it just isn't detecting it as an update from the older version to the latest version. This is a problem, since my current customers aren't receiving this bugfix release. Any help is appreciated.
I have URLs as below:
1. aa/bb/cc/dd
2. nbcn/hbd/hvhd/hbxn

I need a regular expression to get the below output:
1. aa/bb
2. nbcn/hbd
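Assuming the value is in a field called url (adjust the field name to yours), this captures the first two path segments:

```
| rex field=url "^(?<short_url>[^/]+/[^/]+)"
```

For aa/bb/cc/dd this yields short_url=aa/bb; for nbcn/hbd/hvhd/hbxn it yields nbcn/hbd.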
Hi, for testing purposes we are trying to use the Logstash client command line to send data to a Splunk server instance. The client configuration is the following (based on the steps found in https://docs.splunk.com/Documentation/SplunkCloud/8.2.2203/Data/UsetheHTTPEventCollector) and it uses an HTTP Event Collector token:

input {
  stdin {
    codec => json
  }
}

output {
  http {
    format => "json"
    http_method => "post"
    url => "https://prd-p-kz4cj.splunkcloud.com:8088/services/collector/raw"
    headers => ["Authorization", "Splunk 26c964a2-c1e8-46e8-96ca-679d3b7542bd"]
  }
  stdout {
  }
}

However, it fails with the following error:

[ERROR] 2022-07-01 14:28:12.794 [[main]>worker5] http - Could not fetch URL {:url=>"https://prd-p-kz4cj.splunkcloud.com:8088/services/collector/raw", :method=>:post, :message=>"Connect to prd-p-kz4cj.splunkcloud.com:8088 [prd-p-kz4cj.splunkcloud.com/54.167.42.214] failed: connect timed out", :class=>Manticore::ConnectTimeout, :will_retry=>true}

And updating the URL above to https://prd-p-kz4cj.splunkcloud.com/en-US:8088/services/collector/raw produces a different error:

[ERROR] 2022-07-01 14:34:26.702 [[main]>worker4] http - Encountered non-2xx HTTP code 404 {:response_code=>404, :url=>"https://prd-p-kz4cj.splunkcloud.com/en-US:8088/services/collector/raw", :event=>#<LogStash::Event:0x199312cb>}

Also, we have noticed that a ping to the Splunk server instance does not return a reply:

ping prd-p-kz4cj.splunkcloud.com
PING prd-p-kz4cj.splunkcloud.com (54.167.42.214) 56(84) bytes of data.

Could it be that a server configuration setting is missing?

Kind regards,
Moacir Silva
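Two things worth checking, both hedged guesses from the symptoms: Splunk Cloud HEC endpoints normally use an "http-inputs-" hostname prefix on port 443 rather than <stack>.splunkcloud.com:8088 (the connect timeout suggests nothing is listening on 8088 at that hostname), and the "/en-US" path belongs to Splunk Web, not HEC, which explains the 404. Also, Splunk Cloud generally does not answer ICMP, so the failed ping by itself proves little. A sketch of the output block using the documented URI shape (verify the exact hostname for your stack in the Splunk Cloud HEC docs):

```
output {
  http {
    format => "json"
    http_method => "post"
    url => "https://http-inputs-prd-p-kz4cj.splunkcloud.com:443/services/collector/raw"
    headers => ["Authorization", "Splunk 26c964a2-c1e8-46e8-96ca-679d3b7542bd"]
  }
}
```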