All Posts


Hi all, rex "WifiCountryDetails\W+(?<WifiCountryDetails>[\w*\s*]+)" We are using the above rex to extract the Wi-Fi country details. The problem is that when the Wi-Fi country name is empty, the capture automatically grabs the next field's value instead. If WifiCountryDetails is empty, the extracted field should be empty as well. Please let me know how to achieve this.
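A minimal sketch of one possible fix, assuming the fields in the event are separated by commas or line breaks (adjust the delimiter class to your actual format): the class [\w*\s*]+ requires at least one character and matches across whitespace, which is why an empty country lets the capture spill into the next field. Matching zero or more characters up to the delimiter returns an empty capture instead:

... | rex "WifiCountryDetails\W+(?<WifiCountryDetails>[^,\r\n]*)"
| fillnull value="" WifiCountryDetails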
"Hi @bowesmana, thank you for your response. I need a regular expression to extract the correlation_id because I want to calculate the average time taken for two source events. The samples I provided... See more...
"Hi @bowesmana, thank you for your response. I need a regular expression to extract the correlation_id because I want to calculate the average time taken for two source events. The samples I provided are as follows: correlation_id: "['321e2253-443a-41f1-8af3-81dbdb8bcc77']" correlation_id: "11315ad3-02a3-419d-a656-85972e07a1a5" These are two format logs one is in array format and another normal value. Thanks in advance
Hi @vennemp, were you able to get the issue fixed? I am facing the same issue with the Okta Splunk SAML integration.
Thank you for your message. I checked all occurrences of the original app name in all files and found that many .py files still used the original name, so I replaced those with the new name. Now inputs.conf seems to work and I can see logs. The issue I have now is that the app's UI shows a new error: "Configuration page failed to load, the server reported internal errors which may indicate you do not have access to this page." I am checking again what other replacements I need to make.
Hi, how do I limit the results per host? I have an arbitrary search query and 10 hosts. For each host, hundreds of events are shown. In a statistics table, I want to show only one event per host. This way, I can check whether each host has the log file; the contents of the log file don't matter. How do I perform this search? The statistics table (or dashboard) will serve one function: check whether the log exists on every server.
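A minimal sketch, assuming any single event per host is enough evidence that the log file exists (replace the base search with your own):

your_search_here
| stats latest(_time) as last_seen, latest(_raw) as sample_event by host
| convert ctime(last_seen)

Alternatively, | dedup host keeps just the most recent event per host in the event list.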
As I always caution people in this forum, do not treat structured data such as JSON as text.  Regex is usually not the right tool. Is the illustrated JSON the raw event?  If so, Splunk should have given you a field named correlation_id of value ['321e2253-443a-41f1-8af3-81dbdb8bcc77'].  If it is part of a raw event that is compliant JSON, you need to show the full raw event - and Splunk should have given you a field named some_path.correlation_id.  If it is part of a raw event that is not JSON, you need to show the raw event so we can help you extract the JSON part, then you can use spath on the JSON part.  This is much more robust and maintainable than using regex on structured data.
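For illustration, a hedged sketch of that spath approach, assuming the JSON object is embedded in an otherwise non-JSON raw event (the rex only isolates the JSON part; the path is a guess at your structure):

... | rex "(?<json_part>\{.*\})"
| spath input=json_part path=correlation_id output=correlation_id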
Illustrating raw data is an improvement. Now could you describe the desired outcome, perhaps with a mock table, and the logic connecting the sample data to that desired result?
"I must filter which Host gets which Risk (Hosts can have multiple Risk values) and what risk is falling away on which date and what risk is new."

You need to first refine this requirement to a point where you can mathematically, perhaps even visually, represent the desired outcome. (This is really not about Splunk, but about data analytics.) I cannot think of a single table that represents the above sentence. Can you illustrate with a mock results table, and some mock data from which that table would be derived? Or are you looking for multiple charts, one for each element of that sentence?
Let me see if I can restate the requirement correctly: if the IP values for a given Hostname overlap in the two lookups, that's a match, unless lookup A contains only a single IP. A mismatch happens if there is zero overlap of IPs for a Hostname between the two, or if lookup A contains a single IP for that Hostname. Mathematically, this translates into a test of unique values, because if there is any overlap, the total number of unique IPs must be smaller than the sum of unique IPs in each lookup. Hence

| inputlookup lookup_A
| stats values(IP) as IP_A by Hostname
| append
    [| inputlookup lookup_B
     | stats values(IP) as IP_B by Hostname]
| eval IP = coalesce(IP_A, IP_B)
| stats values(IP_A) as IP_A values(IP_B) as IP_B values(IP) as IP by Hostname
| eval match = if(mvcount(IP_A) == 1 OR mvcount(IP) == mvcount(IP_A) + mvcount(IP_B), "no", "yes")

Your sample data gives

Hostname   IP_A                     IP_B                     IP                       match
Host A     10.10.10.1 10.10.10.2    10.10.10.1               10.10.10.1 10.10.10.2    yes
Host B     172.1.1.1                172.1.1.1 172.1.1.2      172.1.1.1 172.1.1.2      no

Below is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "Hostname,IP
Host A,10.10.10.1
Host A,10.10.10.2
Host B,172.1.1.1"
| multikv forceheader=1
| fields - _* linecount
| stats values(IP) as IP_A by Hostname
``` above emulates | inputlookup lookup_A | stats values(IP) as IP_A by Hostname ```
| append
    [| makeresults
     | eval _raw = "Hostname,IP
Host A,10.10.10.1
Host B,172.1.1.1
Host B,172.1.1.2"
     | multikv forceheader=1
     | fields - _* linecount
     | stats values(IP) as IP_B by Hostname]
``` subsearch emulates | append [| inputlookup lookup_B | stats values(IP) as IP_B by Hostname] ```
| eval IP = coalesce(IP_A, IP_B)
| stats values(IP_A) as IP_A values(IP_B) as IP_B values(IP) as IP by Hostname
| eval match = if(mvcount(IP_A) == 1 OR mvcount(IP) == mvcount(IP_A) + mvcount(IP_B), "no", "yes")
Specifically, listing them using GET is proving troublesome. When I search the returned results, I don't find all alerts, but I do find all reports. The POST to create an alert is not an issue.
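In case the gap is a namespace/permissions one, a hedged sketch (credentials are placeholders): the saved/searches endpoint only returns entries visible in the calling user/app context by default, so wildcarding owner and app and removing the page limit often surfaces the missing alerts:

curl -k -u admin:changeme "https://localhost:8089/servicesNS/-/-/saved/searches?count=0"

From there, alerts can be told apart from reports by inspecting the alert-related attributes of each returned entry (e.g. whether alert actions are configured).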
Hello, sorry for the missing information. I am really new to Splunk and it's complicated with all the parameters. I get one event per host per risk. That means the host with the IP 10.10.10.10 gets scanned with a vulnerability tool, and afterwards I get a log with 20 different vulnerability events: for example, 2 with the risk classification Critical, 10 with High, and 8 with Medium. Every risk is one event for this host, so I get 20 different events for the same host.
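A minimal sketch of one way to compare consecutive scans per host, assuming fields named Host and Risk and one scan per day (all names are guesses at your data):

index=vuln_scans
| eval scan_day = strftime(_time, "%Y-%m-%d")
| stats values(Risk) as risks by Host scan_day
| sort 0 Host scan_day
| streamstats current=f window=1 last(risks) as prev_risks by Host
| eval new_risks = mvmap(risks, if(isnull(mvfind(prev_risks, "^".risks."$")), risks, null()))
| eval gone_risks = mvmap(prev_risks, if(isnull(mvfind(risks, "^".prev_risks."$")), prev_risks, null()))

new_risks lists classifications that appear in a scan but not in the previous one; gone_risks lists the reverse.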
Hello inventsekar, let me try to explain this. We started to monitor our Docker containers in Splunk. Unfortunately, we cannot install Splunk forwarders in a container, so we use Fluent Bit to collect all logs from the containers and ship them via the HTTP Event Collector. So we cannot use a Splunk forwarder in this scenario. When containers crash, they spam exactly the same logs (only the timestamp differs) like there is no tomorrow.
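A minimal search-time sketch for collapsing those near-duplicates, assuming the repeated messages differ only in timestamp (the index name is a placeholder): the cluster command groups events whose text is similar, leaving one representative per repeated message with a count:

index=container_logs
| cluster t=0.9 showcount=true
| table cluster_count _raw
| sort - cluster_count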
Hi @ikulcsar, have you found a solution to the problem? I currently face a similar issue in Splunk 9.0.5 with an accelerated data model: acceleration completes at 100% but with a 0-byte size and no results, while there are 30 buckets and the base search returns a million events with no errors.
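One hedged way to narrow this down, assuming the data model name matches the UI: compare summary-only counts against unsummarized counts, which shows whether the 0-byte summaries are really empty or just misreported:

| tstats summariesonly=true count from datamodel=Your_Data_Model

| tstats summariesonly=false count from datamodel=Your_Data_Model

If the first returns 0 while the second does not, the acceleration summaries were built empty.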
Hi, I would like to implement the following behavior in Dashboard Studio: when a user clicks on a line chart showing the error-count trend of a flow, a drilldown graph should show the trend of all errors for that specific flow. I tried with the token flow = click.value2, but it does not work. Any hint? Thank you. Best Regards
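One possible cause, offered as a hedged sketch: click.value2 is Classic (SimpleXML) token syntax, while Dashboard Studio sets tokens through a drilldown event handler in the chart's JSON source. Assuming the clicked series name carries the flow (token and key names may need adjusting to your chart):

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                { "token": "flow", "key": "name" }
            ]
        }
    }
]

The drilldown search can then reference $flow$.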
Hi team, I have currently configured my OTel Collector to send trace data from the adservice (OTel demo service) to AppDynamics over a proxy. My problem is that AppDynamics doesn't show any ingested data in the OTel section (No Data available). The collector logs show no errors. This is my collector config:

config: |
  receivers:
    otlp:
      protocols:
        grpc:
        http:
  processors:
    resource:
      attributes:
        - key: appdynamics.controller.account
          action: upsert
          value: "company"
        - key: appdynamics.controller.host
          action: upsert
          value: "company.saas.appdynamics.com"
        - key: appdynamics.controller.port
          action: upsert
          value: 443
    batch:
      send_batch_size: 90
      timeout: 30s
  exporters:
    otlphttp:
      endpoint: "https://some-agent-api.saas.appdynamics.com"
      headers: {"x-api-key": "<some-api-key>"}
    logging:
      verbosity: detailed
      sampling_initial: 10
      sampling_thereafter: 5
  extensions:
    zpages:
  service:
    telemetry:
      logs:
        level: debug
    extensions: [zpages]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [resource, batch]
        exporters: [logging, otlphttp]
env:
  - name: HTTPS_PROXY
    value: proxy.company.com:8080
Good day, what screen do users see when they attempt to reply to a poll after clicking on the poll link, if the poll has already reached its maximum number of responses (say the maximum allowed is 100)? Will they still have the opportunity to see the replies chart that illustrates how everyone else answered the questions? That is the outcome I am hoping for. Many thanks.
Hi, how are you? Thank you for the community! I have tried to search logs using the API as per Creating searches using the REST API - Splunk Documentation. This seems complex; it should be possible, but in my experience it has been impossible for me so far. How do I search in Splunk using the API? Here is what I found: https://community.splunk.com/t5/Building-for-the-Splunk-Platform/How-to-collect-debug-logs-for-apps-on-Splunk-Cloud/m-p/586144 . Kind regards, Tiago
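For reference, a minimal sketch of the documented flow (host, credentials, and the search string are placeholders): with exec_mode=oneshot, the job-creation call returns results directly, skipping the create-then-poll cycle:

curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
    -d search="search index=_internal | head 5" \
    -d exec_mode=oneshot \
    -d output_mode=json

Without exec_mode=oneshot the POST returns a <sid>; poll services/search/jobs/<sid> until the job is done, then GET services/search/jobs/<sid>/results.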
How can I make the Splunk REST API SID remain unchanged?
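If the goal is a caller-controlled SID, a hedged sketch: the job-creation endpoint accepts an id parameter to supply your own search ID, so the SID is known in advance instead of server-generated (the value below is a placeholder, and creating a job with an ID that already exists will fail):

curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
    -d id=my_fixed_sid \
    -d search="search index=_internal | head 5"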
Hi @_JP, thanks for the reply, but that's not what I'm looking for. I want the ability to list and create alerts, not view triggered alerts.
Hi @_JP, I conceptually agree with you, but the customer already has logs in Logstash and wants to use Enterprise Security, which uses CIM. For this reason I have to ingest and parse the Logstash data while trying to persuade the customer to move to Universal Forwarders. I asked the community whether someone has already addressed this problem, to get some hints or points of attention. Anyway, working by myself, I have already mapped some data flows back to standard add-ons. Thank you for your answer. Ciao. Giuseppe