All Topics

Could someone please help me convert a time format? The time is "Thu jul 20 18:49:57 2020" (string type) and I'm trying to get 2020-07-20 18:49:57. As the final result I want the difference between two dates, e.g. 2020-07-20 18:49:57 - 2020-07-21 18:49:57, in days.
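One possible conversion in SPL uses strptime to parse the string into epoch seconds and strftime to reformat it; the field names time and time2 are illustrative assumptions:

```
| eval epoch1=strptime(time, "%a %b %d %H:%M:%S %Y")
| eval formatted=strftime(epoch1, "%Y-%m-%d %H:%M:%S")
| eval epoch2=strptime(time2, "%a %b %d %H:%M:%S %Y")
| eval diff_days=round((epoch2 - epoch1) / 86400, 1)
```

Dividing the epoch difference by 86400 (seconds per day) gives the gap in days.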
Hello, I'm no longer able to retrieve historical data from inputlookup incident_review_lookup. When I check the lookup (| inputlookup incident_review_lookup), I don't see the old data, just data that was recently generated. Does anyone know the generating search to rebuild the lookup with the data from 3-6 months ago (I think the old data is in `notable`)? This issue appeared after restoring the KV store from a backup. Thanks.
I have an API that logs the start and end of each request. What I'd like to make sure I'm monitoring is the requests that start but, for whatever reason, don't finish. This should never happen, but I'd like to be sure I can identify them just in case. The start of a request looks like this: ... line.component="gateway" line.message="request received" ... The end looks like this: ... line.component="gateway" line.message="request completed" ... There is a request-id value logged for each request that traces a given request through the app. What I guess I'm looking for is a left join that counts any null completed values. I've been playing with the "join type=left request-id" syntax, but haven't found the thing that gets me where I need to be. Any ideas?
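An alternative to join that is usually more idiomatic in SPL is a single stats over both event types, grouped by the request id. A hedged sketch; the index name is an assumption, and the dotted/hyphenated field names may need the quoting shown here:

```
index=api_logs line.component="gateway"
    (line.message="request received" OR line.message="request completed")
| rename "request-id" AS request_id
| stats count(eval('line.message'="request received"))  AS started,
        count(eval('line.message'="request completed")) AS completed
        BY request_id
| where completed=0
```

Rows surviving the final where are requests that started but never logged a completion.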
I tried to enable some use cases from Splunk ESCU, then copied the SPL and ran searches to test. It seems that some use cases show an error due to MLTK. Any idea how to solve this? I use Splunk Core 8.0.4 with ES 6.2.0.
I want to keep adding new records to a Splunk lookup table without writing records again for existing users, even if they come up in the search results. Lookup table structure: apiCallerID, ticketId. My base search uses mvexpand, mvindex, and rex commands and then a table, so when I add "NOT" it does not work, and I haven't found any other way or correction: index=test sourcetype=stats earliest=-5m@m latest=-0m@m| eval temp=ltrim(Response,"[") | eval temp1=rtrim(temp, "]") | eval temp2=split(temp1,"}")| mvexpand temp2| eval temp3=ltrim(temp2,",")  | eval testData=mvindex(temp3,-1) | rex field=testData "userID:\s(?<apiCallerID>\d+)" | table apiCallerID  NOT [|inputlookup LookuptableGeneratorForDSIDByTestID.csv| fields apiCallerID] Basically, if the user is not there, I want to add a ticket id along with the user to the lookup table; in future the system will use this to raise any new tickets and prevent duplicate tickets for an existing user.
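One hedged sketch of a fix: a bare NOT [...] after table is not valid, but piping through the search command makes the subsearch filter work, and outputlookup append=true writes only the surviving (new) rows. The parsing stages are copied from the question:

```
index=test sourcetype=stats earliest=-5m@m latest=-0m@m
| eval temp=ltrim(Response, "[")
| eval temp1=rtrim(temp, "]")
| eval temp2=split(temp1, "}")
| mvexpand temp2
| eval temp3=ltrim(temp2, ",")
| eval testData=mvindex(temp3, -1)
| rex field=testData "userID:\s(?<apiCallerID>\d+)"
| table apiCallerID
| search NOT [| inputlookup LookuptableGeneratorForDSIDByTestID.csv | fields apiCallerID]
| outputlookup append=true LookuptableGeneratorForDSIDByTestID.csv
```

Populating ticketId for the new rows (e.g. with an eval before the outputlookup) is left out here, since how ticket ids are generated is not described in the question.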
Hi, in the known issues this problem is listed (STREAM-4301, STREAM-4409): https://docs.splunk.com/Documentation/StreamApp/7.2.0/ReleaseNotes/Knownissues The proposed workaround is unclear: "Workaround: Manually re-configure streams for the forwarder to resume, or restart the Splunk Forwarder service in Windows." My question: how do we manually reconfigure streams for the forwarder to resume? Hopefully there is a workaround that allows automated recovery from this error.
Hi all, I have the Splunk_TA_windows and I noticed that there are multiple transforms that extract a field named src.  For example: - Client_Address_as_src - Source_Network_Address_as_src I am wondering what value the field "src" will have if two or more of the source fields have values. I need this because the Authentication data model has this field, and I want to know which values it uses.  Thanks
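One way to check empirically which source field ended up in src on your own data; the index name is an assumption, and the case() branches only cover the two fields named in the question:

```
index=wineventlog src=* (Client_Address=* OR Source_Network_Address=*)
| eval src_origin=case(src==Client_Address, "Client_Address",
                       src==Source_Network_Address, "Source_Network_Address",
                       true(), "other")
| stats count BY src_origin
```

The resulting counts show which extraction is actually winning when both raw fields are present.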
Hello all, my latest challenge is to ingest a Word doc into our environment.  According to everything I have read so far, this should be straightforward, as Splunk can ingest 'any' file.  At this point I should point out that I am not concerned about the contents of the file (as this all needs to be obfuscated); I only need to ingest the file to get its name.  I am not concerned with whether or not Splunk can read the Word-type formatting. The file is created daily with the format "My Word Doc ddmmyyyy hh mm.doc", and I am only interested in the "ddmmyyyy hh mm" part, to ensure that it has been created today. I cannot get the .doc file to ingest at all, not even in an unformatted state.  If I save the file as a ".txt" file, then it is ingested.  Unfortunately, the 'save as' option is not an option in production. I have tried using the 'whitelist =' option without any success. Can anyone suggest a solution?  Is there something in my installation that is stopping Word docs from being ingested?  Has anyone else had a similar experience?   Thanks
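A hedged sketch of a monitor input combined with NO_BINARY_CHECK, the props.conf setting that stops Splunk from skipping files it classifies as binary (which .doc files usually are); the path and sourcetype name are illustrative assumptions:

```
# inputs.conf -- path is illustrative
[monitor:///data/word_docs]
whitelist = \.doc$
sourcetype = word_doc

# props.conf -- don't skip files detected as binary
[word_doc]
NO_BINARY_CHECK = true
```

Since only the filename matters here, the source field on the indexed events carries the "My Word Doc ddmmyyyy hh mm.doc" name regardless of whether the body is readable.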
1) I am using the search below to get a list of all machines:   | metadata type=host index=* | stats count by host     Is it possible to get the IP address as well?  2)  Here I also need the IP address: index=windeventlog sourcetype=winEventLog:Security EventCode=4625 | stats count by Account_Name, EventCode, Workstation_Name | sort - count   Please suggest.    Thanks in advance.
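For the first search, one hedged option is Splunk's built-in dnslookup, which resolves hostnames to IPs via forward DNS; this assumes the host values are resolvable from the search head:

```
| metadata type=host index=*
| stats count BY host
| lookup dnslookup clienthost AS host OUTPUT clientip AS ip
```

The same lookup line can be appended to the EventCode=4625 search against Workstation_Name, under the same DNS-resolvability assumption.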
Hello, I have a Windows Server 2012 R2 machine that I configured as an Active Directory domain controller, and I created users (user_1, user_2) and added a list of computers (Client_1, Client_2, ...) under the domain. What I want is: if user_1 fails to log in on Client_1, it should send event code 4625 to the AD machine.
I have a time format issue with Splunk logs: events are not being indexed against the correct timestamp. In props.conf I have:    SHOULD_LINEMERGE = false TIME_PREFIX = \[ TIME_FORMAT = %d/%b/%Y:%H:%M:%S %Z MAX_TIMESTAMP_LOOKAHEAD = 26   But the issue is that both 26/Aug/2020:10:21:00 and 26/Aug/2020:06:10:21 (logs below) are coming in under the time 26/Aug/2020:10:21:00. Here are the logs:    dateTime="[26/Aug/2020:10:21:00 +0000]" remoteUser="-" firstLine="POST /api/1 HTTP/1.1" httpStatus=200; bytesSent=120 - responseTime=127 dateTime="[26/Aug/2020:06:10:21 +0000]" remoteUser="-" firstLine="POST /api/2 HTTP/1.1" httpStatus=200; bytesSent=1512 - responseTime=14   So when doing timechart span=1s I am getting wrong results. Why is the timestamp matching not working correctly? Any help is appreciated.
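A hedged variant of the props.conf worth trying: anchoring TIME_PREFIX on the dateTime key makes the prefix unambiguous, and lowercase %z is the strptime code for numeric offsets like +0000 (whether the %Z vs %z distinction is the actual culprit here is an assumption to verify in your environment):

```
[your_sourcetype]
SHOULD_LINEMERGE = false
TIME_PREFIX = dateTime="\[
TIME_FORMAT = %d/%b/%Y:%H:%M:%S %z
MAX_TIMESTAMP_LOOKAHEAD = 26
```

The stanza name is a placeholder for your actual sourcetype, and props.conf changes only affect newly indexed events.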
What could be the possible reasons for not being able to access the Enterprise Console URL (considering it was working fine the day before, and something changed overnight that we are unaware of)?
How do Splunk indexer operations work when a peer goes into manual detention? We are migrating from RHEL 6 to RHEL 8 and can't do the OS upgrade on the same machines, so we will be getting new RHEL 8 machines, installing Splunk there, and proceeding from there. Current setup: 5 indexers (RHEL 6) in an indexer cluster, S.F = 1, R.F = 1. Our plan is to add 5 new indexers (RHEL 8) to the indexer cluster and route all sources to send data to the newly added indexers; once that is complete, we will make sure that no source is sending anything to the old indexers. We will then enable manual detention on the old indexers so that they don't replicate their data to the other (newly added) indexers. Going through the manual detention documentation (below), I found that only four operations change when a peer is in manual detention. The peer: - stops replicating data from other peer nodes. - optionally stops accepting data from the ports that consume external data, causing the peer to no longer index most types of external data. - continues to index internal data and stream the data to target peer nodes. - continues to participate in searches. https://docs.splunk.com/Documentation/Splunk/8.0.5/Indexer/Peerdetention But I would like to know: if a peer is in manual detention, will it still roll stored index data to the frozen DB per the retention policy? Or, apart from the four points above, do all other operations continue as usual even in detention mode? I am concerned because in our case we have a hot DB (local mount, 700 GB of data on each peer) and a cold DB (NAS mount, 2.3 TB on each peer), and we are going to share the same NAS mount with the new peers by creating separate subfolders for each new peer. Secondly, how will my S.F and R.F behave, since they are 1:1? Most of the data will sit on the old peers, and those will be in detention mode; will S.F and R.F be met, and is there any impact on searches?
The final step would be to run splunk offline --enforce-counts to decommission the old peers one by one, but that all depends on the question above (if a peer is in manual detention, will it roll stored index data to the frozen DB per the retention policy?). If a peer in detention mode does not purge data per the retention period, then we would not have enough space in the NAS storage (cold DB) for the data that rolls over from the hot DB on the new peers. If the old peers (in detention mode) do continue to purge their data per the retention period, then space will be cleared on one side while new data is added to the NAS (cold DB) on the other side. Assuming all of this works: since R.F is set to 1, each peer should have one copy of the data; if some peer doesn't, I think new replica copies will be created when we decommission. Please suggest.
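For reference, a sketch of the CLI steps under discussion, as documented for Splunk 8.x peer detention and decommissioning; the -auth credentials are placeholders:

```
# on each old peer: enable manual detention
splunk edit cluster-config -manual_detention on -auth admin:changeme

# later, decommission the old peers one by one
splunk offline --enforce-counts
```

With --enforce-counts, each peer waits until the cluster master confirms the replication and search factors are met elsewhere before shutting down.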
I've found that upgrading a custom app built with Splunk Add-on Builder clears all configured data inputs. Is there a way to prevent this?
Hi all, I'm having an error when alerts/reports are sent by email. I'm getting this error in python.log: 2020-08-26 03:01:23,403 +0200 ERROR sendemail:143 - Sending email. subject="Splunk Report", results_link="https://splunk/app/search/XXXX", recipients="[u'toto.toto@toto.com']", server="XXX.XXX.XXX.XXX" I can see an RSET on the SMTP server when this error appears. It happens with some reports and alerts, but never the same ones. Do you have any idea that could help me? Can we set a "retry" setting in order to try a second time when this error appears? Thank you in advance, Cheers
We are looking at integrating Splunk with AWS. For that, we need to clarify which of the Splunk products is most suitable. We have to handle IT security, IT operations, and the AWS Cloud.
Hi everyone, how can data be normalized to a specified format at search time? It can happen that the same name occurs in different notations. It shall be normalized so that there is: A single whitespace before and after ‘:’ A single whitespace before and after ‘/’ A single whitespace after ‘;’ I went through this documentation:  https://docs.splunk.com/Documentation/CIM/4.16.0/User/UsetheCIMtonormalizedataatsearchtime but couldn't find a solution. Can someone help, please?
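One possible search-time normalization chains eval's replace() function, which does regex substitution; the field name raw_name is an assumption:

```
| eval norm=replace(raw_name, "\s*:\s*", " : ")
| eval norm=replace(norm, "\s*/\s*", " / ")
| eval norm=replace(norm, "\s*;\s*", "; ")
```

Each step collapses any amount of surrounding whitespace (including none) around the delimiter into the single-space form required.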
Hi all, I'm using the (excellent) TrackMe app, which uses a metrics index. The index has been created on an indexer cluster and I've verified that it is actually there ( /opt/splunk/bin/splunk list index -datatype metric ). However, when I try to search the index using  | mcollect split=t index="trackme_metrics" I get the following: "Error in 'mcollect' command: Must specify a valid metric index" This is the first and only metrics index on our cluster, so I cannot verify that other metrics indexes work OK. Also, the only suggested resolution to this seems to be that I should put the metrics index on our search head cluster, but that makes no sense to me! Am I doing something wrong, or is there some setting that I need to configure before I can use a metrics index? Many thanks, Mark.
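One thing worth checking (this is an assumption to verify, not a confirmed fix): mcollect appears to validate the target index against the indexes defined locally on the search head, so the metric index may also need an indexes.conf stanza on the search head cluster members. With forwarding configured, the collected data would still land on the indexers; the stanza only makes the index known locally. Paths below are illustrative:

```
# indexes.conf on each search head cluster member (paths illustrative)
[trackme_metrics]
datatype = metric
homePath = $SPLUNK_DB/trackme_metrics/db
coldPath = $SPLUNK_DB/trackme_metrics/colddb
thawedPath = $SPLUNK_DB/trackme_metrics/thaweddb
```

If this is the cause, it would also explain why the "put the metrics index on the search head cluster" suggestion keeps coming up.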
Hi everyone, I have installed Boss of the SOC v3 following the manual from GitHub, and after all the steps I get the error "404 Not Found" on the page ".../en-US/custom/SA-ctf_scoreboard/scoreboard_controller/submit_question?Number=101&Question..." when I try to submit an answer. What is the root cause of this issue, and how can I fix it? Thanks
Following the install instructions to install the app agent: === From the root directory of your Node.js application, run this command: npm install appdynamics@20.8.0 === But it seems the highest published version is 20.7.0:   https://www.npmjs.com/package/appdynamics
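A hedged workaround until 20.8.0 is published: list the versions npm actually knows about and install the newest one (20.7.0 here is taken from the question, not independently confirmed):

```
npm view appdynamics versions
npm install appdynamics@20.7.0
```

If the docs and the registry keep disagreeing, the version output from the first command is the authoritative list.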