All Topics


I installed the Universal Forwarder on a Linux machine and integrated it with Splunk, but no logs are returned on the Splunk search head. For your information, I'm currently working on a distributed Splunk Enterprise deployment. Any recommendations?
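For reference, a minimal forwarder configuration usually amounts to an outputs.conf pointing at the indexing tier and an inputs.conf monitoring files; the host name, port, and paths below are assumptions for illustration only:

# $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarder (hypothetical indexer address)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.com:9997

# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder (hypothetical monitored path)
[monitor:///var/log/messages]
index = main
sourcetype = syslog

If that much is in place, searching index=_internal for the forwarder's host name on the search head is a quick way to confirm the forwarder is connecting at all.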
Hi, I'm using Splunk 8.0.4, and when I use mpreview it returns "Unknown search command 'mpreview'". Any idea? Thanks
Hi all, I'm on Splunk version 9.0.2. After decommissioning one indexer in a multisite cluster, I can't get my search factor / replication factor (SF/RF) back to being met. A rolling restart and a cluster manager restart (splunkd) had no effect. I have 3 SF tasks pending, all with the same message: "Missing enough suitable candidates to create a replicated copy in order to meet replication policy. Missing={ site2:1 }". I have tried Resync and Roll with no success. In the details of the pending tasks, I can see that the bucket exists on only one indexer and is not searchable on the other indexers of the cluster. My SF = 2 and RF = 2. I'd like to be in a clean state before decommissioning the next indexer. Any advice or help to get my SF/RF back would be highly appreciated (it is a production issue). Thanks in advance.
So, I've been away from Splunk for several years and am now revisiting it. I've got a scenario where I would like to track certain metrics from imported data. I created a simple CSV with just a few entries to demonstrate the issue I'm having. Below is the source data I created:

customer_id,Time,customer_fname,customer_lname,products,product_prices
111,12/1/2023,John,Doe,"product_100,product_200","100,200"
222,12/11/2023,Suzy,Que,product_100,100
333,12/15/2023,Jack,Jones,product_300,300
111,12/18/2023,John,Doe,product_400,400

These are just examples of customers, the items they purchased, and the price paid.

After uploading the file and displaying the data in a table, it looks as expected:

source="test_sales.csv"
| table customer_id, customer_fname, customer_lname, products, product_prices

Upon using makemv to convert "products" and "product_prices" to multivalue fields, the results are again as expected, and the products and prices align since they were entered into the source CSV in the proper order:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| table customer_id, customer_fname, customer_lname, products, product_prices

Here is my issue: is there a way to tie each product of a purchase transaction in the multivalue "products" column to its corresponding price in the multivalue "product_prices" column? Everything seems to work except when I try something like listing the products by price:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| stats count(products) by products, product_prices

In those results I get combinations that are not what I would expect. For example, it shows:
- 3 instances of product_100 at a price of 100; there should only be 2 instances
- 2 instances of product_100 at a price of 200; there should be 0 instances of this combination
- 2 instances of product_200 at a price of 100; there should be 0 instances of this combination
- 2 instances of product_200 at a price of 200; there should only be 1 instance

I'm likely approaching this incorrectly or using the wrong tool for the task; any help to get me on the right track would be appreciated. Thanks
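One commonly used pattern for keeping paired multivalue fields aligned is mvzip plus mvexpand; a sketch against the same CSV:

source="test_sales.csv"
| makemv delim="," products
| makemv delim="," product_prices
| eval pair=mvzip(products, product_prices)
| mvexpand pair
| eval product=mvindex(split(pair, ","), 0), price=mvindex(split(pair, ","), 1)
| stats count by product, price

Each row of the expanded result then carries one product with the price it was zipped against, so the stats counts line up with the source rows.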
Hi Team, I have two dashboards designed for specific sets of locations. My plan is to consolidate them into a single dashboard, using filters to distinguish between locations. For instance, locations aaa, bbb, ccc, and ddd pertain to the inContact application, while locations eee, fff, ggg, and hhh belong to the Genesys application. The location value is in the marketing area field: "marketing-area": "aaa". I need to enable a filter for both inContact and Genesys: clicking on inContact should display its locations, and clicking on Genesys should show the corresponding locations. Please let me know if any further input is required.
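As a rough sketch (assuming the extracted field is named marketing_area and that a dashboard dropdown sets an $app_token$ token; both are assumptions), the application could be derived from the location and then filtered:

index=my_index
| eval application=case(
    in(marketing_area, "aaa", "bbb", "ccc", "ddd"), "inContact",
    in(marketing_area, "eee", "fff", "ggg", "hhh"), "Genesys")
| search application="$app_token$"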
Hi all, I am coming from Splunk on-prem, so this is a bit confusing to me. I have looked at architectures for Splunk Cloud and can't understand how data configurations are done when using Splunk Cloud. For example, say you have a UF on a machine that forwards data to the Splunk Cloud indexers, and you need to create a custom sourcetype for a specific piece of data. Where would you define the parsing rules if you don't manage the indexers? Furthermore, if the data can be onboarded with a TA, how would you install that TA onto the indexers to assist with onboarding (assuming no need for a HF)? Any help would be appreciated, thanks!
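For illustration, parsing rules for a custom sourcetype typically live in props.conf inside an app; in Splunk Cloud that app (or the vendor TA) is usually uploaded as a private/self-service app rather than by touching the indexers directly, and Splunk routes the index-time settings to the indexer tier. A sketch with placeholder names:

# myorg_custom_parsing/default/props.conf  (hypothetical private app)
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TRUNCATE = 10000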
Hello. I'm using Splunk Cloud and thinking about adding a summary index or a data model. I'm trying to understand the difference between the three options: summary index, report acceleration, and data model. Can someone please explain the main purpose of each? Is a summary index the best way to avoid performance issues with heavy searches? How does a summary index work? Should I create a new index and run my dashboards against that index? Thanks
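As a sketch of the summary-index workflow (the index and field names below are placeholders): a scheduled search writes pre-aggregated results into a dedicated summary index, and dashboards then search that much smaller index instead of the raw events.

Scheduled search, e.g. run hourly over the previous hour:
index=web sourcetype=access_combined
| stats count as hits by status, host
| collect index=my_summary

Dashboard search over the summary:
index=my_summary
| stats sum(hits) as hits by status, host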
Hello, I noticed that in versions 9.1 and later, the default user and group were changed to "splunkfwd". I updated the universal forwarder to the newer version (9.1), but the user and group did not change to "splunkfwd". Subsequently, we encountered several permission-related problems, such as the Universal Forwarder lacking permission to read the auditd logs, which makes it necessary to modify the "log_group" parameter in auditd.conf. Should I change it manually, or is there an alternative solution that resolves all of the permission problems?
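For reference, the auditd change the question refers to usually looks like the sketch below (assuming the forwarder now runs as the splunkfwd user and group); whether to apply it by hand or through configuration management is a separate decision:

# /etc/audit/auditd.conf
log_group = splunkfwd

# restart auditd so the new group ownership of the audit log applies
service auditd restart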
Hello Community, I have a challenge finding and isolating the hosts that are unique to one of two sources (DHCP and Sysmon in my case). I tried the following, but it did not work as expected:

EXAMPLE 1:
index=dhcp_source_index
| stats count by host
| eval source="dhcp"
| append [ search index=sysmon_index | stats count by host | eval source="sysmon" ]
| stats values(source) as sources by host
| where mvcount(sources)=1 AND sources="dhcp"

EXAMPLE 2:
index=my_index
| dedup host, source
| stats list(source) as sources by host
| append [ search index=my_index | stats latest(_time) as last_seen by host ]
| eventstats max(last_seen) as last_seen by host
| where mvcount(sources)=1
| table host, last_seen

The numbers from my manual checks and the SPL above differ. Thanks in advance.
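A sketch of one way to do this in a single pass (using the index names from the question), which avoids the append/subsearch limits and keeps one row per host:

(index=dhcp_source_index) OR (index=sysmon_index)
| eval src=if(index="dhcp_source_index", "dhcp", "sysmon")
| stats values(src) as sources, latest(_time) as last_seen by host
| where mvcount(sources)=1 AND sources="dhcp"
| table host, last_seen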
Hi all, I currently have to decommission 6 indexers in a 9/9 multisite indexer cluster. The command used: splunk offline --enforce-counts. Three days have passed, and I still see a large number of buckets on the offlined indexer; the bucket count is not decreasing, or only by a very small amount. The indexer is still in "Decommissioning" status on the cluster manager (Settings > Indexer Clustering). The RF/SF is not met. There are no more active tasks (around 12,000 tasks all completed OK), except for 4 pending tasks that are waiting for the RF/SF to return to OK. All the indexers on both sites are communicating well with each other. Has anybody already encountered this problem? I have checked the error messages (splunkd.log) on the CM, the decommissioned indexer, and the other indexers, and I don't find any relevant messages or errors. Is it safe to launch a rolling restart, or should I restart splunkd on the decommissioned indexer? Thanks for any help.
I want to find the Account_Name with the maximum number of login attempts within a span of 10 minutes, using the range() function. I don't know how to provide the parameters to this function. Some help would be appreciated!
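range() returns the difference between the maximum and minimum of a numeric field, so it may not be the right fit here. A sketch of one common approach (the index, the EventCode for failed logons, and the field names are assumptions based on Windows security logs):

index=wineventlog EventCode=4625
| bin _time span=10m
| stats count as attempts by _time, Account_Name
| stats max(attempts) as max_attempts_in_10m by Account_Name
| sort - max_attempts_in_10m
| head 1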
Hi, I have a search that shows traffic output as sum(sentbyte). This is my search (names have been changed to protect the guilty):

index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| eval mytime=strftime(_time,"%Y/%m/%d %H %M")
| stats sum(sentbyte) by mytime

The results show the peak per minute, which I can graph with a line chart, and they range up to 10,000,000. I have tried to set up an alert for when sum(sentbyte) is over 5,000,000 but cannot get it to trigger. My alert trigger condition is set to custom:

| stats sum(sentbyte) by mytime > 5000000

I may be on the wrong track for what I am trying to do, but I have spent many hours going in circles with this one. Any help is greatly appreciated.
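As a sketch based on the search in the question, the threshold is usually applied with a where clause after the stats, with the alert's trigger condition then set to "number of results greater than 0":

index=netfw host="firewall" srcname IN (host1,host2,host3) action=allowed dstip=8.8.8.8
| eval mytime=strftime(_time, "%Y/%m/%d %H %M")
| stats sum(sentbyte) as total_sent by mytime
| where total_sent > 5000000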
How do I get peak TPS stats for a month, along with the count for all route codes?
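The question is terse, so this is only a sketch under assumptions (an index named transactions and a field named route_code are both placeholders): count events per second, then take the per-route peak over the month:

index=transactions earliest=-1mon@mon latest=@mon
| bin _time span=1s
| stats count as tps by _time, route_code
| stats max(tps) as peak_tps, sum(tps) as total_events by route_code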
Hi, I am implementing a Splunk SOAR connector and I was wondering if it is possible to write logs at different levels. There are different levels that can be configured under System Health / Debugging, but the BaseConnector only has debug_print and error_print methods. How can I print INFO, WARNING, and TRACE logs in my connector? Thanks, Eduardo
So... I have a HEC receiving JSON for phone calls, using a custom sourcetype that parses the timestamp from a field called timestampStr, which looks like this:

2024-01-19 19:17:04.60313329

The sourcetype uses TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N, and this sets _time to 2024-01-19 19:17:04.603 in the event, which seems right.

However, if I then, as a user in the Central time zone, ask for the calls "in the last 15 minutes" (assuming I just made the call), it does not show up. In fact, to find it I have to use All Time, because the call appears to be "in the future" relative to my user (which feels dumb/weird to write, because I know it's not technically true, but it's the best way I can explain it). Here is an example of what I mean:

1. I placed a call into the network at 1:17 PM Central time (which is 19:17 UTC).
2. The application sent a JSON message for that call.
3. It came in as above.
4. I then ran a search in Splunk looking for that call in the last 15 minutes, and it did not find it.
5. However, when I immediately asked for All Time, it did.

My assumption was that if my user settings are set to Central, it would "correlate the calls" correctly, but this appears not to be true; or, more likely, my understanding of what is going on is very poor. So I once again return to the community looking for answers.
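One likely explanation (an assumption, since the props.conf isn't shown) is that the timestamp string carries no timezone indicator, so Splunk applies the indexer's default timezone instead of UTC and the event lands "in the future". A sketch of pinning the sourcetype to UTC:

# props.conf for the custom sourcetype (stanza name is a placeholder)
[phone:calls:json]
TIME_PREFIX = "timestampStr"\s*:\s*"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N
TZ = UTC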
Hi Team, we have 800+ servers, a mix of Windows and Linux. How can we get the following details from Splunk with an SPL query: OS version, allocated storage (GB), utilized storage (GB), uptime %, CPU utilization peak %, and CPU utilization average %? Can you please help us with this requirement? Thanks, Raghunadha.
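A complete answer depends on which add-ons are collecting the metrics (for example the Splunk Add-ons for Unix/Linux and for Windows). As a rough sketch, assuming the *nix add-on's cpu sourcetype is being collected into an "os" index (both assumptions), peak and average CPU per host could be derived like this:

index=os sourcetype=cpu cpu="all"
| eval cpu_used_pct = 100 - pctIdle
| stats max(cpu_used_pct) as cpu_peak_pct, avg(cpu_used_pct) as cpu_avg_pct by host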
Can someone explain to me where the attrs argument pulls its attributes from? Originally I thought it was essentially the "-Properties" flag from Get-ADUser and that I would be able to use those properties, but whenever I try, it says: "External search command 'ldapsearch' returned error code 1. Script output = "error_message=Invalid attribute types in attrs list: PasswordExpirationDate"." Where is the attrs list? How can I define more attrs?
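For illustration (the domain name is a placeholder): attrs expects raw LDAP/Active Directory attribute names as defined in the directory schema, not the calculated properties that Get-ADUser synthesizes, which is likely why PasswordExpirationDate is rejected. A hedged sketch using attributes that do exist in AD:

| ldapsearch domain=default search="(&(objectClass=user)(sAMAccountName=*))" attrs="sAMAccountName,pwdLastSet,msDS-UserPasswordExpiryTimeComputed"
| table sAMAccountName, pwdLastSet, msDS-UserPasswordExpiryTimeComputed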
Hi, I'm new to Splunk and relatively inexperienced with DevOps topics. I have a Splunk OpenTelemetry Collector deployed in a new namespace in my Kubernetes cluster, and I want to configure an OTLP receiver to collect application traces via gRPC. I used https://github.com/signalfx/splunk-otel-collector-chart to deploy the collector; I also enabled the OTLP receiver and added a new pipeline to the agent config. However, I struggle to understand how to send traces to the collector. As I see in Kubernetes, there are many agents deployed, one for each node:

$ kubectl get pods --namespace splunk
NAME READY STATUS RESTARTS AGE
splunk-otel-collector-agent-286bf 1/1 Running 0 172m
splunk-otel-collector-agent-2cp2k 1/1 Running 0 172m
splunk-otel-collector-agent-2gbhh 1/1 Running 0 172m
splunk-otel-collector-agent-44ts5 1/1 Running 0 172m
splunk-otel-collector-agent-6ngvz 1/1 Running 0 173m
splunk-otel-collector-agent-cpmtg 1/1 Running 0 172m
splunk-otel-collector-agent-dfx8v 1/1 Running 0 171m
splunk-otel-collector-agent-f4trw 1/1 Running 0 172m
splunk-otel-collector-agent-g85cw 1/1 Running 0 172m
splunk-otel-collector-agent-gz9ch 1/1 Running 0 172m
splunk-otel-collector-agent-hjbmt 1/1 Running 0 172m
splunk-otel-collector-agent-lttst 1/1 Running 0 172m
splunk-otel-collector-agent-lzz4f 1/1 Running 0 172m
splunk-otel-collector-agent-mcgc8 1/1 Running 0 173m
splunk-otel-collector-agent-snqg8 1/1 Running 0 173m
splunk-otel-collector-agent-t2gg8 1/1 Running 0 171m
splunk-otel-collector-agent-tlsfd 1/1 Running 0 172m
splunk-otel-collector-agent-tr5qg 1/1 Running 0 172m
splunk-otel-collector-agent-vn2vr 1/1 Running 0 172m
splunk-otel-collector-agent-xxxmr 1/1 Running 0 173m
splunk-otel-collector-k8s-cluster-receiver-6b8f85b9-r5kft 1/1 Running 0 9h

I thought I needed to somehow send trace requests to one of these agents, but I don't see any Services or Ingresses deployed, so my application has no DNS name to use for the collector:

$ kubectl get services --namespace splunk
No resources found in splunk namespace.
$ kubectl get ingresses --namespace splunk
No resources found in splunk namespace.

Does that mean I have to add Services/Ingresses myself, and the splunk-otel-collector Helm chart doesn't include them? Do you have any recommendations on how to configure this collector so it can receive gRPC traces from applications in other pods and namespaces? It would be nice to have one URL that automatically gets routed to the collector agents.
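Since the agents run as a DaemonSet, the pattern commonly used with this chart (sketched below under the assumption that the agent exposes OTLP gRPC on host port 4317; verify the port against your chart values) is for each application pod to talk to the agent on its own node via the node IP from the downward API, rather than through a Service:

# Hypothetical excerpt from the application's Deployment pod spec
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"   # agent's OTLP gRPC hostPort (assumed)

If that holds for your deployment, no extra Service or Ingress is needed, since every node already runs an agent listening on that host port.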
Hi, I have the string below and I'm trying to extract the downstream status code with this expression. I used to do this a long time ago, but it appears those brain cells have aged out.

Regex that works on regex101 but not in Splunk:

rex "DownstreamStatus..(?<dscode>\d+)" | stats count by dscode

String:

{"ClientAddr":"blah","ClientHost":"blah","ClientPort":"50721","ClientUsername":"-","DownstreamContentSize":11,"DownstreamStatus":502,"Duration":179590376953,"OriginContentSize":11,"OriginDuration":179590108721,"OriginStatus":502,"Overhead":268232,
Hi team, I'm trying to set up the integration between Jamf Protect and Splunk according to the steps provided in the following link: Jamf Protect Documentation - Splunk Integration. When I follow the steps under the "Testing the Event Collector Token" heading, specifically the part that says "Using the values obtained in step 1, execute the following command:", I can see the logs sent from my local machine on the Splunk search head, but I can't see the Jamf Protect logs coming from other clients. However, I can see the logs when I send them with curl. Additionally, when I run tcpdump on the heavy forwarder, I can see that the logs are being received, but they do not show up in search. What could be the reason for this? Furthermore, where can I check the error logs from the command line to examine any issues? Thanks
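For the troubleshooting part of the question: one common sketch is to check splunkd's HTTP Event Collector logging, either via the _internal index from search or directly on the heavy forwarder (the path assumes a default install location):

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR

On the forwarder's command line:
grep -i httpinput /opt/splunk/var/log/splunk/splunkd.log | tail -50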