All Topics


Good day everyone. How can I visualize and edit this query to show the status of our servers as ONLINE/OFFLINE?
I have an interesting problem that I am not sure how to solve. I have a CSV that I am monitoring. The CSV has approximately 232 column headings, given it's a big data source. The data is being pulled in, but some columns are not being extracted for some reason. For example, one missing column heading is "comp_ip". An SPL search in either smart or verbose mode doesn't show the field "comp_ip". If I then write SPL as follows:

index=foo sourcetype=csv | dedup comp_ip | table comp_ip

then Splunk happily shows me a table with all the values. If I run my search in verbose mode and then look back at "Events", I can see my field among the interesting fields. However, if I then revert to a normal search (i.e. index=foo sourcetype=csv), the interesting fields no longer show this field. I have also checked to make sure there are no other interesting fields that have not been selected. If I manually take the CSV file, do a manual "Add Data", and apply the sourcetype, I can see the column "comp_ip" with the relevant data. I am at a loss.
Hello Splunkers, I'm working on creating a DB health check report. The idea is to get the error info when there is a failed DB connection. When I try to run the search below in Splunk QA, I get this error: Error in 'rex' command: Encountered the following error while compiling the regex '^(?<error>.*)\n?': Regex: syntax error in subpattern name (missing terminator). Could you please help me resolve this issue? Thanks in advance.

index="_internal" sourcetype=dbx_job_metrics input_name=* connection="*"
| eval event_time=strftime(_time,"%m/%d/%y %H:%M:%S")
| join type=left connection
    [search index="_internal" sourcetype=dbx_server ERROR
    | rex field=_raw "^(?<error>.*)\n?"
    | rex field=error "/api/connections/(?<connection>[^/]+)"]
| stats latest(event_time) as event_time latest(host) as HF latest(connection) as connection latest(status) as status latest(error) as error by input_name
| sort - status
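For what it's worth, the pattern itself is valid PCRE, so a "missing terminator" in the subpattern name usually means the `<` in `(?<error>` got mangled before it reached rex; one common culprit is saving the search inside dashboard XML, where `<` must be written as `&lt;`. As a sanity check of the regex logic itself, here is a Python sketch (Python requires the `(?P<name>...)` spelling for named groups; the sample event is made up):

```python
import re

# Hypothetical raw line resembling a dbx_server ERROR event (made-up sample).
raw = "2020-09-02 09:10:31 ERROR something failed\nGET /api/connections/my_db/status"

# Python's re requires (?P<name>...) for named groups;
# Splunk's rex (PCRE) also accepts the shorter (?<name>...) form.
error = re.search(r"^(?P<error>.*)\n?", raw).group("error")
connection = re.search(r"/api/connections/(?P<connection>[^/]+)", raw).group("connection")

print(error)        # first line of the raw event
print(connection)   # connection name from the URL path
```

If the same pattern compiles fine outside the dashboard, the escaping of the surrounding XML is the place to look.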
Hi, I need to change the background color of a dashboard in Splunk Cloud. How can we achieve this in the source code? Can we add it directly in the source code, since we cannot place files in the bin folder of Splunk Cloud due to access restrictions? Attaching an image of the dashboard.
Hi, I am using the Splunk Add-on for AWS app (5.01) to ingest data from SQS/S3 into Splunk 8.0.3 on-prem. Our network gateway requirement is that traffic goes through a proxy. For some reason, this add-on is not able to use the proxy settings to establish a connection to the AWS SQS queue. It keeps hitting the errors below:

level=ERROR pid=14252 tid=MainThread logger=splunk_ta_aws.modinputs.sqs.aws_sqs_data_loader pos=aws_sqs_data_loader.py:log:93 | | message="Failed to get SQS queue url" .............................. Files\\Splunk\\etc\\apps\\Splunk_TA_aws\\bin\\3rdparty\\python3\\botocore\\httpsession.py\", line 283, in send\n raise EndpointConnectionError(endpoint_url=request.url, error=e)\nbotocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: \"https://sts.amazonaws.com/\"\n"

However, I am able to get the SQS queue URL when I use the AWS CLI in Windows PowerShell and CMD:

PS C:\> aws sqs get-queue-url --queue-name C*******-Sqs
{ "QueueUrl": "https://sqs.ap-southeast-2.amazonaws.com/1*************/C***********-Sqs" }

I noticed the Splunk Add-on for AWS web UI has proxy settings. When set, it generates a passwords.conf file containing a hash - I am guessing it is the hashed proxy URL. As we do everything via configuration management, it would be awesome to have this in a conf file. I did notice that the add-on's aws_settings.conf file contains a proxy stanza, but when I set the options there they don't take effect in the web UI. Where can I find the proxy conf file for this add-on? How can I tell whether the add-on is using my proxy settings at all? I have been stuck on this issue for days - please help!
Hello Splunkers. I would like to monitor the same file across multiple folders, as below; each host is a folder name. It works in one app. The file names in the folders may be the same or different. My setting in inputs.conf:

[monitor://D:\Splunk\Check\TEST*\*.csv]
disabled = false
host_regex =
index = test
host =
host_segment = 3
sourcetype = testcheck
crcSalt = <<SOURCE>>

Thank you, please help.
We have our authentication tied to AD using the LDAP strategy. Password complexity and lifetime are, as a result, handled by the requirements set by the AD Group Policy. But what about failed login attempts? If a user types their password incorrectly multiple times (or worse, someone tries to guess a user's password multiple times), will it cause their account to become locked within Splunk (and possibly within the underlying OS too)?
I'm looking to get some JSON data from our anomaly detection system into the Intrusion Detection data model, and thus need to map the fields to the CIM. The JSON events vary depending on the model being 'breached', so not all events contain the dest and src fields in the same place. The JSON data contains many multi-value fields. The required data is not always in its own single-value field (those I can just alias); it is sometimes in an array, and not always in the same place (depending on the triggers that caused each model to breach). So I need to search each array for certain indicators such as "Destination Endpoint" (let's call this 'A') and then map the actual endpoint name ('B') from another field, using the array location of A. I've been looking at the mvfind command, but before I spend a great deal of time on this I was wondering whether my approach is correct, or whether what I want to do is even possible in the first place. E.g. can I use the output of mvfind as an input to spath, maybe? Once I've got a search working, I'll be looking to extract the values automatically, and I'm assuming search time would still be okay for the data model? The frequency of events is not very high (one or two every five minutes or less), so I don't think an index-time extraction would put too much load on the HF/indexer. I can't really share an example event due to the nature of its contents, but let me know if there's any more info that would help. Thanks very much in advance.
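On the feasibility question: SPL's mvfind returns the position of the first matching value in a multi-value field, and mvindex can then read a parallel multi-value field at that position, so the approach is workable. The mechanics are the same as this Python sketch (field names and values are made up):

```python
# Two parallel multi-value fields, as extracted from a hypothetical event.
indicators = ["Source Endpoint", "Destination Endpoint", "Rule Name"]
values = ["host-a", "host-b", "Unusual Beaconing"]

# mvfind-style lookup: locate the indicator, then read the parallel
# field at the same position (mvindex in SPL).
idx = indicators.index("Destination Endpoint")
dest = values[idx]
print(dest)  # host-b
```

In SPL this would look roughly like `| eval idx=mvfind(indicators, "Destination Endpoint") | eval dest=mvindex(values, idx)`, assuming both arrays have been extracted into multi-value fields first (e.g. via spath).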
For anomaly detection on a string field, which method is better: Z-score or histogram? Please advise.
Hello, I'm trying to chart a typical week for our web application users based on data from the last 4 weeks. The idea, roughly explained, is that I would calculate the sum per request group (login, user accounts, etc. - already done) per day, and then create some kind of "7 day window" in which the graph shows only 7 days, but each day is the average of that weekday over the last month. So the graph would show (for example, for request_group='login'):

Monday - 10 - which is the average of the sums of all Mondays (10, 10, 5, 15, 10)
Tuesday - 8 - which is the average of the sums of all Tuesdays (8, 10, 6, 8)
...and so on, up until Sunday.

Part of my search is:

host="server" sourcetype="access_combined"
... some eval stuff ...
| fields _time request_group
... here should be the magic calculating the data ...

I've already tried different approaches using streamstats and timewrap, but nothing worked as I intended. Thank you in advance.
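One possible shape for the missing piece is to bucket by day, tag each day with its weekday (e.g. via strftime(_time, "%A")), and then average per weekday with stats. The arithmetic that computes looks like this Python sketch, using the example figures above (the dates are hypothetical):

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily sums for request_group='login' (figures from the example).
daily_counts = {
    date(2020, 8, 3): 10, date(2020, 8, 10): 10, date(2020, 8, 17): 5,
    date(2020, 8, 24): 15, date(2020, 8, 31): 10,   # Mondays
    date(2020, 8, 4): 8, date(2020, 8, 11): 10,
    date(2020, 8, 18): 6, date(2020, 8, 25): 8,     # Tuesdays
}

# Bucket sums by weekday name, then average each bucket.
names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
by_weekday = defaultdict(list)
for day, count in daily_counts.items():
    by_weekday[names[day.weekday()]].append(count)

averages = {wd: sum(v) / len(v) for wd, v in by_weekday.items()}
print(averages["Monday"])   # 10.0
print(averages["Tuesday"])  # 8.0
```

The SPL equivalent would be something along the lines of `| bin _time span=1d | stats sum(count) as daily by _time | eval weekday=strftime(_time, "%A") | stats avg(daily) by weekday`, though the exact shape depends on how the request-group sums are already computed.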
Hi, I am trying to create a search that looks for specific signatures detected on the IPS and then returns all related firewall and proxy logs, grouped by each related set of events. I have written the following query. It returns the correct results, but takes hours to run, and it looks like the time fields are not being passed to the outer search:

(index=ips OR index=firewall OR index=proxy)
    [search index=ips signature_id IN (25007, 25008, 25009)
    | eval earliest=_time-300
    | eval latest=_time+60
    | fields earliest latest src_ip]
| transaction src_ip

The search is run over 90 days. The inner search completes after around 30 seconds and returns 6 results. I want to run the outer search for each of the six results (5 minutes before the IPS event to 1 minute after). While the results I get are correct, the search took 10 hours to run. If I manually enter the earliest, latest, and src_ip into the following, each result only takes around 2 minutes:

(index=ips OR index=firewall OR index=proxy) earliest=X latest=Y src_ip=Z | transaction src_ip

So I think the outer search is being run 6 times over either 'All Time' or 'Last 90 days'. Can anyone help me get the earliest and latest to pass through so that the query only runs over a 6-minute range for each result of the inner search? Thanks very much.
Hi All, really hoping someone out there can help me with this. We have an in-house app that generates message logs which contain SQL. Each query can be different, so a simple regex extraction won't work because the query can change. Below are 2 _raw examples of different queries:

Example 1:

{"message":"Completed SQL Query","context":{"query":"INSERT INTO \"Messages\" (\"toAddress\", \"fromAddress\", \"templateId\", \"subject\", \"senderObjectTypeId\", \"senderObjectId\", \"ccAddresses\", \"entityId\", \"addresseeObjectTypeId\", \"addresseeObjectId\", \"addresseeId\", \"senderId\", \"type\", \"inbound\", \"status\", \"groupNo\", \"priority\", \"objectKey\", \"uuid\", \"created\", \"createdBy\", \"updated\", \"updatedBy\") VALUES ('somone@hotmail.com', 'something@mail.com', '12345', 'invoice', '1', '1234', '<array>\n<XML_Serializer_Tag>something@mail.co</XML_Serializer_Tag>\n</array>', '123', '12', '12347', '1234564', '123456', 'Email', 0, 'queued', '12345678', '1', 'messages/11111-1111-1111-1111-111111111', '11111-1111-1111-1111-111111111', '2020-09-02T09:10:31+04:00', 12345678, '2020-09-02T09:10:31+04:00', 12345678)

Example 2:

{"message":"Completed SQL Query","context":{"query":"INSERT INTO \"Messages\" (\"parentId\", \"subject\", \"status\", \"entityId\", \"inbound\", \"spamScore\", \"spamReport\", \"type\", \"addresseeObjectTypeId\", \"addresseeObjectId\", \"addresseeId\", \"toAddress\", \"fromAddress\", \"senderId\", \"objectKey\", \"uuid\", \"created\", \"createdBy\", \"updated\", \"updatedBy\") VALUES ('111111', 'Invoice for you', 'received', '11', 1, '1', 'Spam detection software, running on the system \"xyz.net\", has\nidentified this incoming email as possible spam. The original message\nhas been attached to this so you can view it (if it isn''t spam) or label\nsimilar future email. MIME_HTML_ONLY BODY: Message only has text/html MIME parts\n\n', 'Email', '1', '1234', '123456', 'recipient@mai.com', 'sender@hotmail.com', NULL, 'messages/111111-1111-1111-1111-11111111', '111111-1111-1111-1111-11111111', '2020-08-27T01:28:14+00:00', 1, '2020-08-27T01:28:14+00:00', 1)

As you can see in the examples above, the SQL fields can differ and/or be the same but in a different order. Is there a way I can extract the fields based on the "INSERT INTO" column list? So parentId, toAddress, etc. will be extracted in that order - i.e. it gets the field names from the "INSERT INTO" section and populates them from "VALUES", regardless of the order? Would I create an extraction for the "INSERT INTO" fields and the "VALUES" fields and then spath them? E.g. this regex works for Example 2 but not Example 1 (also, for now I don't need comments on my regex - just answers on the core of my question, please):

^[^\)\n]*\)\s+\w+\s+\('(?P<parentId>\d+)[^ \n]* '(?P<subject>[^']+)',\s+'(?P<message_status>[a-z]+)',\s+'(?P<entityId>[^']+)(?:[^'\n]*'){2}(?P<spam_score>\d+\.\d+)(?:[^,\n]*,){6}\s+'(?P<type>\w+)(?:[^'\n]*'){4}(?P<addresseeObjectId>\d+)[^ \n]* '(?P<recipient_Id>\d+)[^ \n]* '(?P<RecipientAddress>[^']+)',\s+'(?P<SenderAddress>\w+@\w+\.\w+)

Thanks in advance, I know it's convoluted - any insights are really appreciated.
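As a feasibility check on the order-independent idea: capture the column list and the VALUES list as two strings, split each into items, and zip them into name/value pairs, so the field order in the event no longer matters. A minimal Python sketch, assuming a simplified query whose values contain no embedded commas (the real events would need a quote-aware tokenizer, since values can contain ", " and escaped quotes):

```python
import re

# Hypothetical, simplified query string; real events carry ~20+ columns.
query = ('INSERT INTO "Messages" ("toAddress", "fromAddress", "subject") '
         "VALUES ('someone@hotmail.com', 'something@mail.com', 'invoice')")

# Capture the column list and the VALUES list as two separate strings.
m = re.search(r'INSERT INTO "\w+" \((?P<cols>.*?)\) VALUES \((?P<vals>.*)\)', query)
cols = re.findall(r'"([^"]+)"', m.group("cols"))
# Naive split: breaks if a value itself contains ", " - see caveat above.
vals = [v.strip().strip("'") for v in m.group("vals").split(", ")]

# Zip names to values so the order in the event no longer matters.
record = dict(zip(cols, vals))
print(record["toAddress"])  # someone@hotmail.com
```

In Splunk terms, the analogous route would be two rex extractions (one for the column list, one for VALUES) followed by eval/mv logic to pair them up at search time, rather than one rigid positional regex per column order.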
The email server configuration was set up by the mail server team, and initially I received mail for alerts and reports. Now I am not receiving any mail for alerts and reports. When I check the Splunk logs I see: ERROR:root:Connection unexpectedly closed while sending mail to alxxx&xxxx.com. Please help - how can I solve this issue?
Hi Everyone, I have to create multiple dashboards for different services, and need to combine them into one single master dashboard via a dropdown menu. Can someone please advise? Regards, Anu
I was wondering why none of the filters I implemented are working. Below are my props.conf and transforms.conf files.

props.conf:

[source::L:\\sample\\logs\\collections...*>]
TRANSFORMS-set = samplecollectionlogs

[source::L:\\sample\\logs\\(?:commands|webapps|partions)...*>]
TRANSFORMS-set1 = samplecommandlogs

[source::L:\\sample\\logs\\engines...*>]
SEDCMD-maskfilterlist = s/\(\(not\(deniedlist1 in \('.*'\)\)\)\) /((not(deniedlist1 in ('_content_removed_by_splunk')))) /

transforms.conf:

[samplecollectionlogs]
REGEX = (^\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s(\{\d+\}\s)?(mapping|custom|TreePrefixBuilder|XB|ScdLookup|\s|\})|^[^0-9\]])
DEST_KEY = queue
FORMAT = indexQueue

[samplecommandlogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Also, one doubt: the L:\\sample\\logs paths are not defined on my heavy forwarder (i.e. where my props and transforms files reside); these paths are defined in the inputs on the universal forwarders. Will the source stanzas match the monitor paths coming from the universal forwarders, or should I define them on the heavy forwarder as well?
I created a calculated field in my data model, freight_service_error_list_martin, called loggerPackage, which is the extraction of the Java package of the logger. When I selected Preview, I saw this field was populated correctly, and it also appears under CALCULATED when I view the data model. However, the calculated field does not appear when executing a search on this data model: "| datamodel freight_service_error_list_martin search". What am I doing wrong?
Hi everyone, I am having trouble matching a token which contains special characters such as (). Below is my search; it does not return any results because the ObjectName contains () in it:

index=wineventlog EventCode=4660 OR EventCode=4663 Account_Name!="ANONYMOUS LOGON" host="myServer" Account_Name!="*$"
| eval ObjectName=urldecode("D:\Company Data\Employee\contract\Michael (Tim)\Induction\Example (D) - .msg")
| eval ObjectName=replace(ObjectName,"\\\\","\\\\\\")
| where match(Object_Name,ObjectName)
| dedup _time host Account_Name Account_Domain Object_Name Accesses EventCodeDescription
| timechart span=60m count(Object_Name) as Changes by Account_Name

If the ObjectName does not contain (), it works well. Could anyone help me handle it? Thanks, Toni
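A likely root cause: match() treats its second argument as a regular expression, so the unescaped ( ) and . in the path change its meaning. The existing replace() call already escapes backslashes; the other metacharacters need the same treatment. A Python sketch of the idea (the sample event line is made up):

```python
import re

# The path contains regex metacharacters: backslashes, parentheses, and a dot.
object_name = r"D:\Company Data\Employee\contract\Michael (Tim)\Induction\Example (D) - .msg"

# re.escape neutralises every metacharacter so the value matches literally -
# the same idea as extending the SPL replace() step to cover "(", ")" and ".".
pattern = re.escape(object_name)
sample_event = "Object_Name=" + object_name + " Accesses=DELETE"
print(bool(re.search(pattern, sample_event)))  # True
```

SPL has no direct re.escape equivalent, so the usual workaround is additional replace() calls for each metacharacter (or avoiding match() in favour of an exact string comparison where possible).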
Hi all, I have a threat feed that is only available using an API key, and I could not see any way to add the API key to the threat intel download option. I can manually create a curl command to download the file to the right folder, but then it will not be registered as an intel file. Thanks.
Hi all, I have a request from a tenant in our environment that requires us to create a dashboard where each column is a date and each row has various criteria. We accomplished this by using the following search structure:

[base search]
| timechart limit=0 span=1d useother=false count as "Row 1" by sourcetype
| fillnull
| reverse
| untable _time, sourcetype, "Row 1"
| eval Time=strftime(_time, "%m-%d-%y")
| table Time, "Row 1"
| transpose header_field=Time 0
| append
    [search [base search]
    | timechart limit=0 span=1d useother=false count as "Row 2" by sourcetype
    | fillnull
    | reverse
    | untable _time, sourcetype, "Row 2"
    | eval Time=strftime(_time, "%m-%d-%y")
    | table Time, "Row 2"
    | transpose header_field=Time 0]
...

Due to the variations in search criteria for each row, it makes the most sense to simply append a new row. The problem I am having is that one of the searches produces no results almost all of the time (note that Row3 is missing from the results). The tenant would like "Row3" to show as a row of 0's, implying that no events matched the specified criteria for that row. Does anybody have a good way to create a timechart table of all 0's for searches that return "No results found"? I have seen a lot of questions and answers on here that basically use an append to give a single value of 0, but for this use case I would like a "0" for each date in the table.
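One common SPL approach is to append a dummy result set generated with `| makeresults` (one row per day of the window, count forced to 0) before the transpose, so the Row3 column always exists, then let fillnull handle any gaps. The zero-fill itself is just "default each date column to 0", sketched here in Python (dates and row names are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical 7-day window of date column headers, "%m-%d-%y" as in the search.
start = date(2020, 9, 1)
dates = [(start + timedelta(days=i)).strftime("%m-%d-%y") for i in range(7)]

# An empty dict models a search that returned "No results found" for Row3.
row3_results = {}

# Default every missing date column to 0 so the row always renders fully.
row3 = {"Row": "Row3", **{d: row3_results.get(d, 0) for d in dates}}
print(row3["09-01-20"])  # 0
```

The key point is that the zero row must carry the same set of date columns as the real rows, otherwise the transpose will not line the columns up.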