All Topics



Dear All, I agree that this may not be the right forum for this question. We are seeing a large number of authentication failures for some accounts, and the sources are two Linux servers. I checked with the users: they did not enter incorrect credentials this many times, so this is certainly some process or job. However, I would like to understand why these attempts are failing. And if these are counted as failed attempts, why don't they lock out the accounts, considering we have an account lock-out policy? Can someone help me understand how these attempts are generated?
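A starting point for profiling these failures, as a minimal sketch; the index, sourcetype, and regex below are assumptions for typical Linux secure logs and will likely need adjusting:

index=os sourcetype=linux_secure "Failed password"
| rex "Failed password for (invalid user )?(?<user>\S+) from (?<src_ip>\S+)"
| stats count, earliest(_time) as first_seen, latest(_time) as last_seen by host, user, src_ip
| sort - count

A steady count at fixed intervals usually points to a cron job or a service with cached, expired credentials; PAM lockout modules such as pam_faillock also commonly exempt certain services or reset their counters, which can explain why no lockout fires.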
How do I eliminate duplicate rows before the transaction command? Because of the duplicates I am getting a wrong calculation. Example scenario: calculating downtime based on events. The query is:

index="winevent" host IN (abc) EventCode=6006 OR EventCode="6005" Type=Information
| eval BootUptime = if(EventCode=6005, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| eval stoptime = if(EventCode=6006, strftime(_time, "%Y-%d-%m %H:%M:%S"), null())
| transaction host startswith=6006 endswith=6005 maxevents=2
| eval duration = tostring(duration, "duration")
| eval time_taken = replace(duration, "(\d+)\:(\d+)\:(\d+)", "\1h \2min \3sec")
| rename time_taken AS Downtime
| dedup Downtime, BootUptime
| table host, stoptime, BootUptime, Downtime

The result is:

host  stoptime             BootUptime           Downtime
abc   2022-30-01 10:39:25  2022-30-01 10:40:29  00h 01min 04sec
abc   2022-09-01 09:27:53  2022-09-01 09:28:34  00h 00min 41sec
abc   2021-28-11 10:52:52  2022-09-01 09:28:34  41d 22h 35min 42sec

Since I have a duplicate in BootUptime, the downtime calculation is incorrect. How do I get rid of this? Thanks in advance.
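One way to attack this, as a minimal sketch: drop duplicate events before the transaction is assembled. The dedup keys below (time, host, and event code) are an assumption about what makes two events duplicates.

index="winevent" host IN (abc) (EventCode=6005 OR EventCode=6006) Type=Information
| dedup _time, host, EventCode
| transaction host startswith=6006 endswith=6005 maxevents=2
| eval Downtime = replace(tostring(duration, "duration"), "(\d+)\:(\d+)\:(\d+)", "\1h \2min \3sec")
| table host, Downtime

Deduplicating before transaction means each shutdown/startup pair is built from unique events, so each outage's duration is computed only once.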
The original problem I am trying to fix is that I am not able to tag single events, since they don't have a small enough field to use for the tags (the only unique field was over 1024 characters). The solution was to create, on the sourcetype we care about, a field holding a SHA-256 hash of the event, giving us a unique field. This is what I added in the local directory of the TA for the sourcetype:

transforms.conf:

[add_event_hash]
INGEST_EVAL = event_hash=sha256(_raw)
FORMAT = event_hash::$1
WRITE_META = true

props.conf:

[thor]
TRANSFORMS-event_hash = add_event_hash

fields.conf:

[event_hash]
INDEXED = true

After restarting Splunk and re-importing the data, the field is successfully created with the value we want, yet the field value is not searchable: searching for event_hash=<hash> returns 0 results, and only event_hash=*<hash>* returns the correct result. Any assistance would be much appreciated.
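A hedged note on the symptom: an exact match failing while a wildcard works is the classic sign that the search head does not know event_hash is an indexed field, so event_hash=<hash> is rewritten into a raw-term search that can never match (the hash is not in _raw). If the setup is distributed, the likely fix is deploying the same fields.conf to the search head(s), for example (the app name is a placeholder):

$SPLUNK_HOME/etc/apps/your_app/local/fields.conf on the search head:

[event_hash]
INDEXED = true

Also note that FORMAT with $1 belongs to REGEX-based transforms; with INGEST_EVAL it should be unnecessary, since INGEST_EVAL writes the field directly.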
I have multiple pie charts, each showing data from a different cluster. I would like to define one generic data source that takes the cluster name as an input. Is there a way to define/set variables within a visualization block and pass them to a data source?

{
  "visualizations": {
    "viz_pie_chart_cluster1": {
      "type": "viz.pie",
      "dataSources": { "primary": "ds1" },
      "title": "Cluster 1",
      "options": { "chart.showPercent": true }
      # I want to pass cluster_name=cluster1 from this visualization
    },
    "viz_pie_chart_cluster2": {
      "type": "viz.pie",
      "dataSources": { "primary": "ds1" },
      "title": "Cluster 2",
      "options": { "chart.showPercent": true }
      # I want to pass cluster_name=cluster2 from this visualization
    }
  },
  "dataSources": {
    "ds1": {
      "type": "ds.search",
      "options": {
        "query": "... cluster_name=$cluster_name$ ..."
      },
      "name": "ds1"
    }
  }
}
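As far as I know a visualization block cannot set tokens for its data source, so a common workaround is one base search plus a small chained search per cluster. A hedged sketch using Dashboard Studio chain searches (the data source names and base query are placeholders):

"dataSources": {
  "ds_base": {
    "type": "ds.search",
    "options": { "query": "... | stats count by cluster_name, slice" }
  },
  "ds_cluster1": {
    "type": "ds.chain",
    "options": { "extend": "ds_base", "query": "| search cluster_name=cluster1" }
  },
  "ds_cluster2": {
    "type": "ds.chain",
    "options": { "extend": "ds_base", "query": "| search cluster_name=cluster2" }
  }
}

Each pie chart then points its primary data source at its own chained search (viz_pie_chart_cluster1 to ds_cluster1, and so on), and the base search runs only once.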
Hi, I am trying to use regex with the Field Extractor to extract the value of a particular field from a given piece of text, but am having a problem with the regex. The text is in the format "text | message: value | more text". So basically I need to extract the value of the field 'message' and put it into a field named raw_message. The value of the message field can be any string. Each field/value pair in the text is separated by a pipe character, as can be seen below. I want to extract just the value of the 'message' field; all other text can be ignored, and the ":" character that follows the field name can be ignored too. Sample text:

| source: 10.2.2.134 | message: P-235332 | host: clmm0011.syn.local

So the regex needs to extract "P-235332" into a new field named raw_message. Can somebody help me with a regex that would work for this? Thanks.
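A minimal sketch of such an extraction, assuming the message value itself never contains a pipe (which the pipe-delimited format implies):

| rex field=_raw "message:\s*(?<raw_message>[^|]+?)\s*(?:\||$)"

The lazy [^|]+? stops at the next pipe delimiter and the trailing \s* trims the space before it, so the sample above yields raw_message=P-235332. The same pattern (without the rex wrapper) can be pasted into the Field Extractor's regex editor.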
Hello, our customer lost access to Splunk Support, but we need to open a ticket. We have the customer's name, the invoice, etc. How can we retrieve the account/password, or simply transfer the supported instances into my account? Regards
I am trying to group by two fields, policy_id and client_rol, with "| stats values(*) by policy_id client_rol", but then the values of the rest of the fields are missing. Given the following table:

policy_id  client_rol  client_id  client_city
001        TO          X0001      LONDON
001        AS          X0001
001        TO          X0001      LONDON
001        AS          X0001

The result I would like to get is:

policy_id  client_rol  client_id  client_city
001        TO          X0001      LONDON
001        AS          X0001

Any clue, guys?
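Two hedged sketches, depending on what is wanted. If the goal is aggregated values per pair, naming the output fields explicitly keeps them visible:

| stats values(*) as * by policy_id, client_rol

If the goal is literally one row per policy_id/client_rol pair, as in the desired output, dedup may be the simpler tool:

| dedup policy_id, client_rol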
I was trying to get the latest time from index=index1 sourcetype=source1. Here is the search string so far:

| tstats latest(_time) as lastTime where index=index1 sourcetype=source1
| eval lastTime=strftime(lastTime, "%x %X")

I want to use this lastTime like the time picker, so that a table displays the data inside source1. The purpose is to always get the data from the last time a query was logged in source1. Could anybody tell me how to continue the search string to achieve this?
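A minimal sketch of one way to continue it: compute the bounds in a subsearch and let | return hand them to the outer search as earliest/latest. The one-hour window around the last event is an assumption.

index=index1 sourcetype=source1
    [| tstats latest(_time) as latest where index=index1 sourcetype=source1
     | eval earliest=latest-3600, latest=latest+1
     | return earliest latest]
| table _time, _raw

The subsearch emits earliest=... latest=..., which the outer search treats as its time range, so the table always shows events around the most recent log in source1.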
Is it possible to revert the KV store storage engine migration in a standalone environment with Splunk Enterprise 8.x? For example: if I have migrated the KV store storage engine from MMAP to WiredTiger, can I revert this change, i.e. migrate from WiredTiger back to MMAP? If it is possible, what are the steps, and is there any documentation for this? I can see the doc/command for migrating from MMAP to WiredTiger:

splunk migrate kvstore-storage-engine --target-engine wiredTiger

I need similar steps for the reverse direction. Please help.
Hello, I need your help please. I have a dedup with a conditional. I have a table where, when a technician enters the reason for a technical service, Splunk saves both the previous value and the new change. I need to delete the repeated rows and keep only the rows whose reason was actually written by the technician.
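A minimal sketch, assuming the technician's text lives in a field called reason and a service record is identified by ticket_id (both names are assumptions):

| where isnotnull(reason) AND trim(reason)!=""
| dedup ticket_id, reason

The where clause drops the rows saved without a reason, and the dedup keeps one row per record and reason.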
Hi, all! I have an existing field, CHECKPOINT_ID, in my table 1, and a separate CSV file that contains an interpretation of each CHECKPOINT_ID. I want to add a new GIVR_CALLFLOW_DEFINED_CHKPNT column to table 1 by using a lookup. (Screenshots of table 1 and of the CSV file were attached to the original post.)
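A minimal sketch, assuming the CSV has been uploaded as a lookup file named givr_checkpoints.csv (a placeholder name) with columns CHECKPOINT_ID and GIVR_CALLFLOW_DEFINED_CHKPNT:

| lookup givr_checkpoints.csv CHECKPOINT_ID OUTPUT GIVR_CALLFLOW_DEFINED_CHKPNT

Every row whose CHECKPOINT_ID appears in the CSV gains the matching GIVR_CALLFLOW_DEFINED_CHKPNT value as a new column.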
Hi, all! How can I separate this field into several other fields of 4 characters each? For example:

Original value: 31381012204777027704
Split into: 3138 1012 2047 7702 7704

Original value: 3138111620941002204720387701W019
Split into: 3138 1116 2094 1002 2047 2038 7701 W019
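A minimal sketch, assuming the field is named CHECKPOINT_ID and a multivalue result is acceptable:

| rex field=CHECKPOINT_ID max_match=0 "(?<chunk>.{4})"

max_match=0 lets rex keep matching, so chunk becomes a multivalue field with one four-character piece per value. Separate columns can then be pulled out with mvindex, e.g. | eval part1=mvindex(chunk,0), part2=mvindex(chunk,1).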
Hi guys, I have a string, for example abc, that I want to look up and match against a lookup table, test.csv. The table contains:

Number  Value
1       xyz
2       abc
3       mnp
4       wgf

I want to check for the presence of my search string abc in the lookup table and show Yes or No in a result table: Yes if it is found in the lookup table, otherwise No. For example, abc is present in the lookup table, so my output should be:

Search string  Presence
abc            Yes

I tried this but didn't get a result:

| inputlookup test.csv
| table value
| rename value AS V1
| eval x="searchstring"
| eval y="v1"
| eval match=if(match(x,y),1,0)
| where match=1
| table Searchstring, Yes

Kindly help me! Thanks in advance.
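A hedged sketch of one way to get a Yes/No answer. Note that in the attempt above, eval y="v1" compares against the literal string "v1" rather than the field V1, which is why nothing matches. Assuming test.csv is available as a lookup file with a column named Value as shown:

| makeresults
| eval search_string="abc"
| lookup test.csv Value AS search_string OUTPUT Value AS matched
| eval Presence=if(isnotnull(matched), "Yes", "No")
| table search_string, Presence

The lookup writes matched only when the string exists in the table, and the eval turns that into Yes/No.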
Hello! The CSS code below works when I put it directly in my dashboard, but not from an external stylesheet.

<panel depends="$visible$">
  <html>
    <style>
      div[data-test-panel-id^='relative'],
      div[data-test-panel-id^='realTime'],
      div[data-test-panel-id^='date'],
      div[data-test-panel-id^='dateTime'],
      div[data-test-column-id^='past'],
      div[data-test-panel-id^='advanced'],
      div[data-test^='real-time-column'] {
        display: none;
      }
    </style>
  </html>
</panel>

I reference the sheet like this:

<form stylesheet="time.css">

What is wrong, please?
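A hedged guess at the usual causes: the external file must contain only the CSS rules (no <panel>, <html>, or <style> wrappers), and it must live under the dashboard's app in appserver/static. A sketch, with the app name as a placeholder:

$SPLUNK_HOME/etc/apps/your_app/appserver/static/time.css:

div[data-test-panel-id^='relative'],
div[data-test-panel-id^='realTime'],
div[data-test-panel-id^='date'],
div[data-test-panel-id^='dateTime'],
div[data-test-column-id^='past'],
div[data-test-panel-id^='advanced'],
div[data-test^='real-time-column'] {
  display: none;
}

After adding or changing the file, Splunk Web typically needs a cache refresh (a restart, or the /en-US/_bump endpoint) before it serves the stylesheet.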
Hello, everyone! I need help. I configured the DB Connect app on a heavy forwarder and set up a database input. I can view the DB logs on the heavy forwarder, but I want to forward this data to the indexers. How can I configure that?
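A minimal sketch of the usual approach: give the heavy forwarder an outputs.conf pointing at the indexers (host names and port are placeholders), and open a receiving port on the indexers.

$SPLUNK_HOME/etc/system/local/outputs.conf on the heavy forwarder:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

On each indexer, enable receiving on the same port (Settings > Forwarding and receiving > Receive data, or a splunktcp://9997 stanza in inputs.conf). Once forwarding is on, the DB Connect events should flow to the indexers instead of being indexed locally.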
We are receiving a log from a host (host=abc) with one interesting field named Ip_Address. The field holds multiple IPs, and an event is indexed every 5 minutes, such as "Ping success for Ip_Address=10.10.101.10" or "Ping failed for Ip_Address=10.10.101.10".

Note: if I get events like "1:00pm ping failed" followed by "1:05pm ping success", we do not count that toward the failed percentage. A failure only counts when it happens more than once in a row, e.g. "1:00pm ping failed" and "1:05pm ping failed".

I am using the query below to calculate the success and failure percentage of one IP over an interval such as a month, but it does not meet my requirement, because I want to cover all IPs in a single query. It would be even better if the result could be shown as a dashboard visualization.

index=unix sourcetype=ping_log "Ping failed for Ip_Address=10.101.101.14" (earliest="01/04/2022:07:00:00" latest="1/07/2022:18:00:00") OR (earliest="01/10/2022:07:00:00" latest="1/14/2022:18:00:00") OR (earliest="01/17/2022:07:00:00" latest="1/21/2022:18:00:00") OR (earliest="01/31/2022:07:00:00" latest="1/31/2022:18:00:00")
| timechart span=600s count
| where count=2
| stats count
| eval failed_min=count*10
| eval total=failed_min/9900*100, SLA=100-total, Ip_Address="10.101.101.14"
| rename SLA as Success_Percent
| table Success_Percent Ip_Address
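A hedged sketch of the per-IP, consecutive-failure logic, assuming the result can be extracted from the raw text as shown (the index and message format mirror the question):

index=unix sourcetype=ping_log ("Ping success" OR "Ping failed")
| rex "Ping (?<result>success|failed) for Ip_Address=(?<Ip_Address>\S+)"
| sort 0 Ip_Address, _time
| streamstats window=2 count(eval(result="failed")) as fails_in_window by Ip_Address
| eval counted_failure=if(fails_in_window=2, 1, 0)
| stats count as total_checks, sum(counted_failure) as failed_checks by Ip_Address
| eval Success_Percent=round((1 - failed_checks/total_checks)*100, 2)
| table Ip_Address, Success_Percent

streamstats with window=2 looks at each event together with the previous one for the same IP, so a failure only counts when it follows another failure; a failure immediately followed by a success is never counted. The result is one success percentage per IP, which drops straight into a dashboard table or bar chart.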
All, I built a previous TA and upgrades worked fine in the past. My recent TA build with Add-on Builder (AOB) 4.0 has an issue where the modular input passwords in passwords.conf are all erased and set to ******** (exactly 8 asterisks). I have tried to debug this every way I could. Has anyone seen an issue where passwords were reset to all asterisks? I know from the logs that this occurs immediately after the upgrade, but the logs don't shed light on why the reset happens:

clear_password {"api_key": "********"}

I am ripping my hair out and can't figure out why this is happening. Once I upgrade and then try to upgrade to a different build, the issue no longer occurs.
It occurs 3-4 hours after starting Splunk. These are the logs found in splunk/var/log/splunk/mongod.log:

I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
I REPL [signalProcessingThread] shutting down replication subsystems
I REPL [signalProcessingThread] Stopping replication reporter thread
I REPL [signalProcessingThread] Stopping replication fetcher thread
I REPL [signalProcessingThread] Stopping replication applier thread
Good afternoon, gurus. I was just put into a position where I have to teach myself Splunk. I don't have experience with this kind of query language and it's bringing me to my knees. Here's my query; there is a selected index and everything works perfectly until I add a simple division statement, at which point Splunk says the query is malformed, but I'm pretty sure that's not the case. I'm trying to get the percentage of events where response_time is greater than 2 standard deviations:

index="myIndex"
| eventstats avg(response_time) as Average_Response_Time stdev(response_time) as Standard_Deviation count(response_time) as Total_Count
| eval calc = Average_Response_Time+(2*Standard_Deviation)
| eval 2xStd = if(response_time>calc, 1, 0)
| eventstats sum(2xStd) as 2times
| eval percent = 2times/Total_Count
| table response_time Average_Response_Time Standard_Deviation
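A hedged diagnosis: eval cannot parse a field name that starts with a digit, so 2xStd and 2times/Total_Count read as malformed numeric expressions; that is what triggers the "malformed" error, not the division itself. Renaming the fields so they start with a letter (or single-quoting them inside eval, e.g. '2times'/Total_Count) should fix it. A sketch with renamed fields:

index="myIndex"
| eventstats avg(response_time) as Average_Response_Time, stdev(response_time) as Standard_Deviation, count(response_time) as Total_Count
| eval calc = Average_Response_Time + (2 * Standard_Deviation)
| eval over_2std = if(response_time > calc, 1, 0)
| eventstats sum(over_2std) as over_2std_count
| eval percent = round(over_2std_count / Total_Count * 100, 2)
| table response_time, Average_Response_Time, Standard_Deviation, percent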