All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a SH that is not part of a SH cluster. The SH is connected to an indexer cluster. I am seeing the following errors on the indexers (W.X.Y.Z is the IP address of the SH): ERROR TcpInputProc [2317 FwdDataReceiverThread] - Error encountered for connection from src=W.X.Y.Z:46788. error:140760FC:SSLroutines:SSL23_GET_CLIENT_HELLO:unknown protocol — I don't think there is a mismatch of sslVersions. Please help me troubleshoot this.
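A possible direction (a sketch, not a confirmed fix): this OpenSSL "unknown protocol" error typically means one side of the connection is speaking plaintext while the other expects TLS, rather than two TLS versions disagreeing. It may be worth confirming that the SH's forwarding output and the indexers' receiving input agree on SSL; stanza names and paths below are illustrative:

```
# On the search head, outputs.conf -- forward over SSL:
[tcpout:indexer_cluster]
server = indexer1:9997,indexer2:9997
useSSL = true

# On the indexers, inputs.conf -- SSL-enabled receiving port:
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```

If the indexers use a plain [splunktcp://9997] input instead, the SH should not be sending SSL to it, and vice versa.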
Hi, I am unable to display labels or any result. I need the values displayed in the chart area instead of the default labels for Splunk pie charts.
Hello All, I have been using SimpleXML to create some nice dashboards with decent visualizations. SimpleXML can: show or hide panels using tokens, reuse the same panel but run different searches based on a token, make font sizes bigger or smaller with CSS, and use JavaScript and jQuery as well. It can also use CSS to highlight text in a string, not just a single value, which seems to be the only choice in Dashboard Studio (change color on a "match", etc.). It seems that I cannot do some or most of those things with Dashboard Studio. Granted, I have just started using Dashboard Studio. I would like to be able to hide or show a table depending on a token, and change the color of a column in a table. And perhaps use a single table for different searches depending on a token or search result. Most of the examples I have seen for Dashboard Studio depend on metrics, numbers, counts, etc. The lion's share of my searches are based on the linux_secure data source and the NIX add-on. I am not counting sales, amounts, or items as shown in the Splunk demo for the "Buttercup" store. I am open to suggestions. It seems to me that Splunk Dashboard Studio has limitations when compared to SimpleXML. I am open to guidance or suggestions, eholz1
Hello Splunk masters, I am trying to figure out how to get a rate (percent) by looking at two strings within a column, then dividing by values in another column. Sample data below. What I'm trying to do is calculate the rate of "incomplete" by batch week: for each batch week, rate = incomplete / (complete + incomplete). As shown below, I included a sample of what I'd like to get as the final output. This is way beyond my Splunk-fu and I'm hoping someone can help me out here. Thanks for the help in advance.

Sample data:
site batch_status batch_week status_count
2506 complete 16 7
2506 incomplete 16 4
2506 complete 17 5
2506 incomplete 17 3
2506 complete 18 2
2506 incomplete 18 4

What I'd like to get back:
2506 incomplete 16 36%
2506 incomplete 17 38%
2506 incomplete 18 66%
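One way to compute this (an untested SPL sketch, using the field names from the post above): sum the counts per site and batch week with eventstats, then keep only the incomplete rows and divide.

```
| eventstats sum(status_count) as total by site, batch_week
| where batch_status="incomplete"
| eval rate = round(status_count / total * 100) . "%"
| table site batch_status batch_week rate
```

With the sample data this would give 36%, 38%, and 67% (4/6 rounds to 67 rather than 66); adjust the rounding to taste.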
Not sure what it looks like on the Unix platform, but in the Web UI on a Windows server there is no separate square (pane) on the Overview screen for the Deployer server, as there is for the Search Head Cluster configuration, although it is clearly assigned the Deployer role in the Monitoring Console and that is the only role for that instance (I like to follow the guidelines to a T). When you click on Topology, it shows up under the Other category along with the Cluster Master and Deployment servers. All I'm saying is that it would be nice to be able to see its performance metrics on the Overview page along with all its other brother and sister servers.
Hello Splunkers, I have a quick question: is it possible to simply extract the content of a journal.zst file? Is it encrypted in some way, or should I be able to retrieve the whole raw data out of it? The journal file is located here: <index_path>/db/hot_<whatever>/rawdata/journal.zst Thanks a lot, GaetanVP
Hi, I'm having some architecture deployment issues on an indexer. When I check hosts using (index="_internal" | stats count by host), the only host that shows up on Indexer2 is Indexer2 itself. I double-checked my outputs.conf on all my instances, and I also checked the inputs.conf on Indexer2; both are set to 9997, and I believe I have the right local IPs on all my instances. Any assistance would be highly appreciated. Side note: I'm using AWS to practice setting up my own architecture for learning purposes; any help will be appreciated. Regards,
I tried the official documentation and community searches but couldn't find out how to reverse the y-axis (not transpose or x/y swap!).
Hi, I am trying to use Dashboard Studio to show transaction response times as a Single Value with a sparkline. I have found that Single Value in Dashboard Studio seems to struggle when the timechart search returns no value for a particular time interval. For instance, if the end of my timechart query has a small 'span=....' value [1], Splunk search shows me there are intervals where no aggregate response time is calculated [2], and I get a "missing property: majorValue" message in the Single Value visualization panel in Dashboard Studio. If I have a large 'span=...' value, Splunk search shows me ALL intervals do have an aggregate response time calculated and Dashboard Studio works as expected.

[1] | timechart span=10s P95(response_time_duration_mssec) as RT_P95

[2]
_time RT_P95
2023-01-03 10:59:50 940.000
2023-01-03 11:00:00 999.000
2023-01-03 11:00:10 946.000
2023-01-03 11:00:20 1164.500
2023-01-03 11:00:30 844.250
2023-01-03 11:00:40
2023-01-03 11:00:50 1108.500
2023-01-03 11:01:00 1290.200
2023-01-03 11:01:10 1555.000
(curtailed)

I'd rather not hardcode 'span=1min' into my dashboard as I have no way of guaranteeing this problem won't recur at times of low usage. This seems like a bug in Dashboard Studio to me. Please advise, and thanks in advance.
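One workaround to consider (a sketch, not a confirmed Dashboard Studio fix): fill the empty buckets after the timechart so every interval carries a value for the Single Value to read.

```
| timechart span=10s p95(response_time_duration_mssec) as RT_P95
| filldown RT_P95
```

filldown repeats the last non-null value, which may be more honest for a response-time metric than `| fillnull value=0 RT_P95`, which would drag the sparkline to zero during quiet intervals.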
can the prompt block support optional inputs from a user?
Hello! If I have this:

Letter Number
A 1
A 2
A 3
B 1
B 2

is there a way to get this:

Letter Number
A 1
  2
  3
B 1
  2

so that the transition from one set of related rows to the next is clear?
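One way to blank out the repeats (an untested SPL sketch, assuming the rows are already sorted by Letter): compare each row's Letter to the previous row's with streamstats and clear it when they match.

```
| streamstats current=f last(Letter) as prev_letter
| eval Letter = if(Letter = prev_letter, "", Letter)
| fields - prev_letter
```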
Hi Team, I have a requirement to integrate Phantom with SNOW. The challenge is that in SNOW I require some extra fields to be populated; for example, the Incident Category field should always be Crime, and the Assignment Group should always be a particular team's ServiceNow queue name. Using the current app in Splunk Phantom, I don't know how to set my required fields. Kindly suggest.
Hi All, Is it possible to monitor AWS services belonging to multiple AWS accounts under the same organization in the Splunk Observability Cloud integration for Infrastructure Monitoring? Regards, Ankur
I want to get the last index of my target value in a multivalue field. For example:

id chain
1 SendMessage CheckMessage PayForIt
2 CheckMessage SendMessage CheckMessage PayForIt
3 PayForIt SendMessage CheckMessage PayForIt
4 SendMessage PayForIt CheckMessage

If "PayForIt" appears, and "SendMessage" and "CheckMessage" both appear before it, this is a normal event. But if "SendMessage" or "CheckMessage" doesn't appear, or only appears after "PayForIt", it is an abnormal event. It means you must send a message and be verified by SMS before you pay for something! Ids 1, 2, and 3 above are normal; 4 is abnormal. I've tried mvfind like below, but it treats 2 and 3 as abnormal events!

eval send=mvfind(chain,"SendMessage") | eval check=mvfind(chain,"CheckMessage") | eval pay=mvfind(chain,"PayForIt") | where isnotnull(pay) and isnotnull(check) and isnotnull(send) and pay>check and check>send
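mvfind only returns the index of the first match, which is why ids 2 and 3 fail. One sketch (untested; mvreverse requires a reasonably recent Splunk version) is to get the last occurrence of each step by searching the reversed multivalue, then compare positions:

```
| eval n = mvcount(chain)
| eval last_pay   = n - 1 - mvfind(mvreverse(chain), "PayForIt")
| eval last_check = n - 1 - mvfind(mvreverse(chain), "CheckMessage")
| eval first_send = mvfind(chain, "SendMessage")
| eval status = if(isnotnull(last_pay) AND isnotnull(last_check) AND isnotnull(first_send)
                   AND first_send < last_check AND last_check < last_pay,
                   "normal", "abnormal")
```

Walking the sample data by hand, ids 1-3 come out "normal" and id 4 "abnormal", since its last CheckMessage (index 2) falls after its last PayForIt (index 1).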
I have data with multiple date fields in GMT. When I import the data with TZ=Europe/Berlin set, I see that _time is in the correct time zone, but the other date fields are still in GMT.

props.conf:
[machine_log]
BREAK_ONLY_BEFORE_DATE=null
CHARSET=UTF-8
FIELD_DELIMITER = ,
FIELD_NAMES = DB_ID, DateOn, DateHist, DateOff, ExportTime, Item, Machine, Section
TIMESTAMP_FIELDS = DateOn, DateHist, DateOff, ExportTime
INDEXED_EXTRACTIONS=csv
KV_MODE=none
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
TZ=Europe/Berlin
category=Structured
disabled=false
pulldown_type=true

I'm still getting data in this way:

_time DB_ID DateOn DateHist DateOff ExportTime Item Machine Section
2023-01-03 12:42:38.787 B0123 2023-01-03 11:41:52.897 2023-01-03 11:42:38.787 2023-01-03 11:42:38.787 2023-01-03 11:42:38.787 I01 M01 S01
2023-01-03 12:41:43.847 B0223 2023-01-03 11:40:18.800 2023-01-03 11:41:43.847 2023-01-03 11:41:43.847 2023-01-03 11:41:43.847 I12 MD1 S02

The index time is correct, but all other date fields keep the original timing with a one-hour offset. The question is: how do I convert all date fields to the correct time zone? Thanks in advance!
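Since TZ= only affects the timestamp Splunk picks for _time, the remaining date fields are plain strings. One search-time approach (an untested sketch, shown for DateOn; the same pattern would apply to the other fields) is to pin the string to GMT with an explicit offset, then let strftime render it in the local time zone:

```
| eval DateOn_epoch = strptime(DateOn . " +0000", "%Y-%m-%d %H:%M:%S.%3N %z")
| eval DateOn = strftime(DateOn_epoch, "%Y-%m-%d %H:%M:%S.%3N")
| fields - DateOn_epoch
```

strftime formats epoch values in the search user's configured time zone, so with a Europe/Berlin user profile the displayed values should match _time.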
Hi, I have a doubt about Splunk Cloud. In the data inputs section, there is no 'REST' option to use an API for data import. Can you please help me with this? Thanks
Can we use a lookup table in place of case/like statements? For example:

index="example" | eval ErrorMessage=case(like(Errormessage,"%Bad Connection%"),"Check Your Connection", like(Errormessage,"%Wrong Password%"),"Please check the password", like(Errormessage,"%Invalid code%"),"Please enter a valid code", 1=1,"NEW Error")

When the query is like this, can we use a lookup table where, if we find the message in column A (which contains % wildcards), we replace it with the fixed value in column B?
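Yes, lookups support this via a wildcard match type, though lookup wildcards use * rather than the % used by like(). A sketch (lookup, file, and field names are illustrative):

```
# transforms.conf
[error_lookup]
filename = error_messages.csv
match_type = WILDCARD(pattern)
max_matches = 1

# error_messages.csv contents:
# pattern,friendly
# *Bad Connection*,Check Your Connection
# *Wrong Password*,Please check the password
# *Invalid code*,Please enter a valid code
```

In the search it would then be something like: index="example" | lookup error_lookup pattern as Errormessage OUTPUT friendly as ErrorMessage — with a fillnull or coalesce afterwards to supply the "NEW Error" default.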
I'm trying to come up with a Splunk search query that I can use to find when customers first attempted to log in. We often get callouts regarding credential stuffing attacks, where hundreds of accounts have attempted to log in, and part of my analysis is finding when these accounts first attempted to log in. At the moment I've got this:

index=keycloak | sort time | streamstats first(time) as first_login by username | dedup username | table username, first_login

The usernames are displayed, but the 'first_login' column is empty.
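One thing worth checking (a hedged guess): if the events don't actually have an extracted field literally named time, first(time) returns nothing, which would leave the column empty; _time always exists. A simpler sketch using stats:

```
index=keycloak
| stats earliest(_time) as first_login by username
| eval first_login = strftime(first_login, "%Y-%m-%d %H:%M:%S")
```

earliest(_time) also removes the need for the sort/streamstats/dedup chain.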
Hello, I have, let's say, "inherited" a few searches and am trying to understand them. Here is the search:

| lookup lu_cisco_umbrella_top_1_million domain as ut_domain output domain as cisco_umbrella_domain | lookup lu_majestic_top_1_million domain as ut_domain output domain as majestic_domain | search cisco_umbrella_domain=no_matches AND majestic_domain=no_matches

Where I stumble now: I understand how I could usually match domain to a field in my search results and, in the output, match another lookup column to another field (like in the Splunk lookup examples documentation), but I can't wrap my head around how this could work with the same field twice. I also don't see the fields majestic_domain or cisco_umbrella_domain in the previous search results, so how could this possibly work? Also, is there a way to take those two lookups and turn them into something like this:

.... (NOT [|inputlookup lu_majestic_top_1_million domain |table domain ]) AND (NOT [|inputlookup lu_cisco_umbrella_top_1_million |table domain ])...

I bet I'm working in totally the wrong direction, so I hope someone can help. Thanks a lot
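On the subsearch question, a roughly equivalent form might look like the sketch below (untested; it assumes each lookup's domain column should be compared against ut_domain, and note that the original search also relies on the lookups emitting the literal value no_matches when nothing matches, likely via a default_match setting in the lookup definition):

```
... NOT [| inputlookup lu_cisco_umbrella_top_1_million | fields domain | rename domain as ut_domain ]
    NOT [| inputlookup lu_majestic_top_1_million | fields domain | rename domain as ut_domain ]
```

The rename is what makes the "same field twice" work: each subsearch returns a list of ut_domain values to exclude, independently of the other.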
Hello, I'm trying to add values to an existing field but I'm running into a wall. I have a field named vector and another field named source. The pattern of the source field is XXX - YYY - ZZZ, and I only want to keep the XXX part, so I've done:

| eval temp = mvindex(split(source, " - "),0)

Then I try to add the result to the vector field like this:

| eval vector = vector + temp

but it doesn't work. Can you help me?
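A possible fix (untested sketch): in eval, + against a null or multivalue field yields null. To append temp as an additional value of the vector field, mvappend may be what's wanted:

```
| eval temp = mvindex(split(source, " - "), 0)
| eval vector = mvappend(vector, temp)
```

If plain string concatenation is the goal instead, the . operator is the string-safe choice: | eval vector = vector . " " . temp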