All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, can you please assist with a query to get the greatest value (for one field) per day and graph that data over the week? I am currently using the query below, and it gives me only the single greatest value for the whole week in the graph:

index="index_name" | search STATS_NAME=BIRTHDAY | chart values(STATS_COUNT) by _time
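One hedged way to chart the daily maximum instead, assuming STATS_COUNT is numeric and reusing the index and field names from the question above, is timechart with a one-day span:

```spl
index="index_name" STATS_NAME=BIRTHDAY
| timechart span=1d max(STATS_COUNT) AS daily_max
```

Over a one-week time range this yields one point per day, each holding that day's greatest STATS_COUNT.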
Hi Team, Splunk Enterprise software pricing is based on the amount of data indexed. Is there any separate pricing for add-ons in Splunk? Thanks
Hi All, is there any cost involved for Splunk add-ons, or is the cost only for indexing data into Splunk? Regards, Madhusri R
Hello! The search head in my cluster ran out of memory, and its status is now "AutomaticDetention". Is it possible to somehow transfer its data to another search head?
Hi Team, I have installed the Jira Issue Collector add-on for Splunk in order to integrate Jira with Splunk, so that I can fetch Jira logs into Splunk. Can someone help me with the pricing? i) If there is a free trial, for how many days can we use it? ii) If it is a licensed version, what would the cost be? Thanks @Daniel Astillero
Hello Splunkers! I've got an issue with this query. In the "main search" I get the field src; can I use src to filter the data in my "second search"? As written, the final result ignores the "main search". Can anyone help me? Thanks.

index=VPN | table src -> main search
[search index=firewall | table src dest_ip] -> second search
| table src dest_ip
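In SPL a subsearch runs first and its results become a filter on the outer search, so a hedged sketch of one way to keep only firewall events whose src also appears in the VPN index (assuming the field is named src in both indexes) would invert the query above:

```spl
index=firewall
    [ search index=VPN | dedup src | fields src ]
| table src dest_ip
```

Here the bracketed subsearch returns the distinct src values from index=VPN, and those values constrain the outer index=firewall search before the table is built.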
I have followed the instructions below already, but I keep getting "no results found". What am I missing, please?

How do I generate a Meta Woot license?
- Go to Settings --> "Searches, reports and alerts", enable the "Generate Meta Woot Server GUID Lookup" search, and then RUN THE SEARCH (click on Run, under Actions) to initially generate the lookup.
- Go to Settings --> "Data models" and enable the acceleration for the Meta Woot License Usage data model.
Hi, it seems I am not able to change my last name, and support@splunk.com is no longer usable for creating a case. I do not have an active license and I wish to change my last name. What should I do?
Good afternoon, I'm new to Splunk. It has just been set up in my environment, and as an analyst I need to know it in order to perform my daily activities. Where can I begin learning Splunk so that I can become a superuser and be able to write commands and queries? Thanks, Bus.
Has anyone tested 'streamfwd' for IPv6?

/opt/splunkforwarder/etc/apps/Splunk_TA_stream/linux_x86_64/bin/streamfwd

[streamfwd]
logConfig = streamfwdlog.conf
port = 8889
maxEventQueueSize = 6000
netflowReceiver.0.ip = xxx.xxx.xxx.xx
netflowReceiver.0.port = 30020
netflowReceiver.0.decoder = netflow
netflowReceiver.0.decodingThreads = 4

Thanks
How do I access an app that is installed on the Cluster Master or License Master, for example from a search head? I have an app installed on my Cluster Master and need to access it from another server.
Hi everyone, I am using the query below:

index=abc ns=blazegateway | stats count by app_name | eval f1="hg"

I am getting this result:

app_name  count  f1
abc       1      hg
bcd       2      hg

My requirement: in column f1 I am getting "hg" in both rows, but I want some other name in the second row. What changes should I make to my query?
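A hedged sketch of one way to assign a different f1 value per row is to number the rows with streamstats and branch on the row number (the values "xy" and "other" below are placeholders to replace with the desired names):

```spl
index=abc ns=blazegateway
| stats count by app_name
| streamstats count AS row
| eval f1=case(row=1, "hg", row=2, "xy", true(), "other")
| fields - row
```

If the f1 value should instead depend on app_name rather than position, the case() branches can test app_name directly, e.g. case(app_name="abc", "hg", app_name="bcd", "xy").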
Dear community experts, I need your urgent help with the error below, which I get when trying to run this curl command:

search="search index=perfmon_idx host=* `M_Performance(Perfmon:CPU Load,% Processor Time)` instance=_Total | timechart avg(Value) by host | eval warning_threshold = 70 | eval critical_threshold = 90" -d output_mode=json -d earliest_time="-60m@m" -d latest_time="-0m@m" -o C:\CPULog.txt

Error in 'SearchParser': Missing a closing tick mark for macro expansion.

Can someone please help me understand what is missing here?
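One common cause of this SearchParser error is macro arguments that contain spaces, commas, or percent signs; quoting each argument often helps. A hedged sketch, assuming M_Performance is a two-argument macro (also worth checking that the shell is not consuming the backticks; in many shells the whole search string must be single-quoted or the backticks escaped):

```spl
search index=perfmon_idx host=* `M_Performance("Perfmon:CPU Load","% Processor Time")` instance=_Total
| timechart avg(Value) by host
| eval warning_threshold = 70
| eval critical_threshold = 90
```

Running the same search string interactively in Splunk Web first is a quick way to separate macro-syntax problems from curl quoting problems.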
I am not putting this question in any of the specific agent forums because it is a general question: do we need to restart a client app, JVM, .NET Coordinator service, or our DB agent when changing agent log levels? I cannot find a clear answer in the docs. I assume we don't need any of this, because we can request agent log files at a certain log level from the GUI without restarts or anything, but I would like someone to confirm, please. I don't have a lab environment to test this myself, which is why I'm asking.
I have a CSV file that consists of the columns below:

EventName
Start Time
Username
severity
alertid

The data in alertid becomes a list when a user is assigned multiple alerts.

Challenge: how do I separate the list in alertid, create a new entry for each value, and copy the same values for the remaining columns?

Below is a sample entry of the CSV file:

Event Name,Start Time,Username,severity,alertid
"alert assigned","1617229938497","sampleuser","5","82574,82573,82572,82569,82568,82567"

Desired result:

Event Name,Start Time,Username,severity,alertid
"alert assigned","1617229938497","sampleuser","5","82574"
"alert assigned","1617229938497","sampleuser","5","82573"
"alert assigned","1617229938497","sampleuser","5","82572"
"alert assigned","1617229938497","sampleuser","5","82569"
"alert assigned","1617229938497","sampleuser","5","82568"
"alert assigned","1617229938497","sampleuser","5","82567"
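If the CSV is available as a lookup, a hedged sketch of the split uses makemv to turn the comma-separated alertid into a multivalue field and mvexpand to emit one row per value (the lookup name my_alerts.csv is a placeholder; the remaining columns are carried along on each expanded row automatically):

```spl
| inputlookup my_alerts.csv
| makemv delim="," alertid
| mvexpand alertid
```

Appending | outputlookup with a new filename would write the expanded rows back out as a CSV.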
Hello, I have Splunk 8.0.4. I tried to send HTTP events from my browser to my index via HEC, but the requests are denied because of a CORS error. I would like some help with this situation, thanks.
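A hedged sketch of one way to allow cross-origin browser requests to HEC is the crossOriginSharingPolicy setting in the [http] stanza of inputs.conf (the origin URL below is a placeholder for the page the browser requests come from, and a Splunk restart is needed after the change):

```ini
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
crossOriginSharingPolicy = https://my-app.example.com
```

A value of * would allow any origin, which is simpler to test with but broader than most deployments want.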
Hello, I have syslog-ng running, and all the syslog messages from my access points and Cisco switches land in the same directory. But the access points should go to a different index than the switch logs, so I created two monitor stanzas; however, the second stanza doesn't work.

#log cisco switches
[monitor:///var/syslog/logavaya/*/*.log]
host_segment = 4
disabled = false
index = cisco
sourcetype = syslog
blacklist = \d-\d\d\.kuechen\.de\.log$

#log avaya access points
[monitor:///var/syslog/logavaya/*/./*.log]
host_segment = 4
disabled = false
index = avaya
sourcetype = avaya:ap
whitelist = \d-\d\d\.kuechen\.de\.log$

The question is: how can I input these files into two indexes with different sourcetypes?
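Since two monitor stanzas covering overlapping file sets tend to conflict, one hedged alternative is a single monitor input plus index-time routing keyed on the source path, done in props.conf/transforms.conf where parsing happens (indexer or heavy forwarder). A sketch, with the transform stanza names as placeholders and the regex reused from the question:

```ini
# props.conf
[syslog]
TRANSFORMS-route_avaya = route_avaya_index, route_avaya_sourcetype

# transforms.conf
[route_avaya_index]
SOURCE_KEY = MetaData:Source
REGEX = \d-\d\d\.kuechen\.de\.log$
DEST_KEY = _MetaData:Index
FORMAT = avaya

[route_avaya_sourcetype]
SOURCE_KEY = MetaData:Source
REGEX = \d-\d\d\.kuechen\.de\.log$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::avaya:ap
```

With this approach the monitor stanza sets index = cisco and sourcetype = syslog as the default, and events whose source path matches the regex are rerouted to the avaya index and sourcetype at parse time.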
Hi, I have created a summary index from an existing index using tstats, but when I try to use tstats directly on the data in the summary index it doesn't work; I can only use stats. Is there a reason, or a workaround?
Hi, we have data that displays total-transaction and successful-transaction counts for our chosen time range. We have one requirement: when some incident happens, the count we get is very low compared to other days. Our target is that whenever we get such a low count, we replace it with the average of the last 3 days' total transactions instead of the low transaction count. Please help me solve this.

Sample data:

Date        Total_transactions  Success_transactions
01-04-2021  80                  70
02-04-2021  75                  70
03-04-2021  100                 90
04-04-2021  10                  2

Expected output:

Date        Total_transactions  Success_transactions
01-04-2021  80                  70
02-04-2021  75                  70
03-04-2021  100                 90
04-04-2021  avg of last 3 days  2
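A hedged sketch with streamstats, assuming the rows arrive one per day sorted by Date, and treating "very low" as less than half of the trailing 3-day average (the 0.5 threshold is an assumption to tune):

```spl
... | streamstats window=3 current=f avg(Total_transactions) AS avg_prev3
| eval Total_transactions=if(Total_transactions < 0.5 * avg_prev3, round(avg_prev3, 0), Total_transactions)
| fields - avg_prev3
```

current=f keeps the current day's low value out of its own average, so on 04-04-2021 avg_prev3 is computed from the three prior days (80, 75, 100).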
I'm currently indexing a JSON payload that looks like this (snippet): "data":[{"dimensions":["HTTP_CHECK-F009EA2B6AA8E2C0","SYNTHETIC_LOCATION-833A207E28766E49"],"dimensionMap":{"dt.entity.synthetic... See more...
I'm currently indexing a JSON payload that looks like this (snippet): "data":[{"dimensions":["HTTP_CHECK-F009EA2B6AA8E2C0","SYNTHETIC_LOCATION-833A207E28766E49"],"dimensionMap":{"dt.entity.synthetic_location":"SYNTHETIC_LOCATION-833A207E28766E49","dt.entity.http_check":"HTTP_CHECK-F009EA2B6AA8E2C0"},"timestamps":[1617467520000],"values":[186]},{"dimensions":["HTTP_CHECK-F06A1F4F9C3252AD","SYNTHETIC_LOCATION-1D85D445F05E239A"],"dimensionMap":{"dt.entity.synthetic_location":"SYNTHETIC_LOCATION-1D85D445F05E239A","dt.entity.http_check":"HTTP_CHECK-F06A1F4F9C3252AD"},"timestamps":[1617467520000],"values":[187]},{"dimensions":["HTTP_CHECK-F06A1F4F9C3252AD","SYNTHETIC_LOCATION-833A207E28766E49"],"dimensionMap":{"dt.entity.synthetic_location":"SYNTHETIC_LOCATION-833A207E28766E49","dt.entity.http_check":"HTTP_CHECK-F06A1F4F9C3252AD"},"timestamps":[1617467520000],"values":[188]} This is being collected by a REST API modular input, and is assigned to a specific sourcetype called "smoketest_json_dyn_tcp". Similar inputs are configured with unique sourcetype names; they are making REST calls to the same destination to collect different metrics. Since the same field names are being returned by the various calls, it makes for quite a conundrum when I'm trying to sort out what value belongs to what metric. The conventional way of assigning field names via extraction doesn't work, as only the first occurrence of the field/value pair is returned; as noted in my sample data, more than one occurrence exists. To make my life easier, I'd like to assign unique field names to the values during index time, using props.conf and transforms.conf. 
This is what I have in place currently:

props.conf:

[smoketest_json_dyn_tcp]
#TZ = US/Eastern
#TZ = EST5EDT
INDEXED_EXTRACTIONS = json
KV_MODE = none
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
TRUNCATE = 200000
REPORT-mv_jdt = mv_jdt

transforms.conf:

[mv_jdt]
REGEX = \"dt.entity.synthetic_location\":\"(\w+)\",\"dt.entity.http_check\":\"(\w+)\",\"timestamps\":\[(\d+)\],\"values\":\[(\d+)\]
FORMAT = testLocation::$1 testName::$2 unixTimeStamp::$3 TCPconnectTime::$4
MV_ADD = true
REPEAT_MATCH = true

Unfortunately, this is not working for me. I've also tried the following in transforms.conf...

[mv_jdt]
REGEX = \"dt.entity.synthetic_location\":\"(?<testLocation>\w+)\",\"dt.entity.http_check\":\"(?<testName>\w+)\",\"timestamps\":\[(?<unixTimeStamp>\d+)\],\"values\":\[(?<TCPconnectTime>\d+)\]
MV_ADD = true
REPEAT_MATCH = true

...but still no luck. Is what I'm attempting possible? If so, what am I missing in my stanzas? Thank you for any assistance provided!
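Two things may be worth checking here, offered as assumptions rather than a definitive fix: REPORT- in props.conf defines a search-time extraction, so for true index-time fields the transform is usually referenced via TRANSFORMS- and given WRITE_META = true; and \w does not match the hyphen in values like HTTP_CHECK-F009EA2B6AA8E2C0, so a class like [\w-]+ may be needed. A sketch reusing the field names from the stanzas above:

```ini
# transforms.conf
[mv_jdt]
REGEX = \"dt\.entity\.synthetic_location\":\"([\w-]+)\",\"dt\.entity\.http_check\":\"([\w-]+)\",\"timestamps\":\[(\d+)\],\"values\":\[(\d+)\]
FORMAT = testLocation::$1 testName::$2 unixTimeStamp::$3 TCPconnectTime::$4
WRITE_META = true
REPEAT_MATCH = true

# props.conf
[smoketest_json_dyn_tcp]
TRANSFORMS-mv_jdt = mv_jdt
```

Index-time transforms only apply where parsing happens (indexer or heavy forwarder), and only to newly indexed data, so testing requires sending fresh events after the change.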