I've been working with the /services/search/jobs/export API recently and noticed that setting the output mode to 'json' can cause error responses to be suppressed. Here's an example:

curl -u $USER:$PASSWORD -k https://<splunk>/services/search/jobs/export -d search='search=savedsearch "my_search"'

<?xml version='1.0' encoding='UTF-8'?>
<response><messages><msg type="FATAL">Error in 'search' command: Unable to parse the search: Comparator '=' is missing a term on the left hand side.</msg></messages></response>

The same request with output_mode=json returns no response content at all:

curl -u $USER:$PASSWORD -k https://splunk.drwholdings.com:8089/services/search/jobs/export -d search='search=savedsearch "my_search"' -d "output_mode=json"

Is there some other flag I need to set to have these errors come through in JSON mode? Requests that don't result in error responses return fine, and both requests come back with status code 200.

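For anyone reproducing this, one way to confirm that the JSON-mode body really is empty is to have curl print the HTTP status alongside whatever body comes back. This is a plain curl sketch (the hostname is a placeholder), not a Splunk-specific flag:

curl -s -k -u $USER:$PASSWORD \
    -w '\nHTTP status: %{http_code}\n' \
    https://<splunk>:8089/services/search/jobs/export \
    -d search='search=savedsearch "my_search"' \
    -d output_mode=json
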
Hi all! I'm trying to get Security Essentials to recognize Mimecast for its Email requirement under Data Inventory. It does not recognize it and only gives onboarding info on O365. I've got Mimecast for Splunk installed and all the dashboards show up. I've updated the "Email" data model to include the index "mimecast"; it is accelerated and contains data.

The CIM Usage Dashboard shows data in the Email data model, populates the dataset field, and shows results. The SA_CIM_Validator also recognizes the Email data model as having the Mimecast data.

So what am I missing, folks? I'd much appreciate any thoughts or ideas you can share.

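For anyone debugging the same thing, a quick sanity check that the Mimecast events really are reaching the accelerated Email data model might look like this (a sketch using the index name mentioned above):

| tstats summariesonly=true count from datamodel=Email where index=mimecast by sourcetype
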
Hi, how can I extract these fields: field1=Version, field2=Author, field3=Date, field4=IssueNo

Here is the log:

23:53:00.512 app module: Abc , Ver:21.2 , 21/10/10 By: J_Danob customer
03:10:15.394 app module: cust_Pack.C, Ver:2.4, Last Updated:21/02/06, by:Jefri.Poor
22:21:51.398 app module: My Properties : Ver. 2.0, Last Updated: 20/03/02, By: Alex J Parson
04:11:26.184 app module: api.C, Ver.:6.0 , Last Updated: 21/11/05, By: J_Danob IssueNo: 12345
04:05:01.488 app module: AjaxSec.C , Ver: 2, 21/07/08 By:J_Danob app
12:27:24.259 app module: L: FORWARD 10 VER 6.1.0 [2021-05-04] [app] Ticket_Again BY Jack Danob
04:11:27.643 app module: [0]L: FORWARD 10 VER 6.2.7 [2021-08-17] [CUST] [ISSUENO:98765] [BY J_Danob] [Edit]
23:53:00.512 app module: Container Version 2.0.0 Added By Jack Danob Date 2021-01-01
23:53:00.512 app module: [0]L: ForwarderSB Version 3 By Danob 21/1/31 check all
04:11:26.186 app module: ApiGateway: Version[2.2.0] [21-09-26] [IssueNo:12345] [BY Jefri.Poor] [Solving]

Expected output:

Version   Date         Author          IssueNo
21.2      21/10/10     J_Danob
2.4       21/02/06     Jefri.Poor
2.0       20/03/02     Alex J Parson
6.0       21/11/05     J_Danob         12345
2         21/07/08     J_Danob
6.1.0     2021-05-04   Jack Danob
6.2.7     2021-08-17   J_Danob         98765
2.0.0     2021-01-01   Jack Danob
3         21/1/31      Danob
2.2.0     21-09-26     Jefri.Poor      12345

Thanks,

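Since the formats vary so much from line to line, one possible starting point is a set of loose rex extractions, one per field. These patterns are only a sketch and will need tuning against the full log set; the Author pattern in particular is approximate and can over-capture trailing words:

| rex "(?i)Ver(?:sion)?[\s:.\[]*(?<Version>\d+(?:\.\d+)*)"
| rex "(?<Date>\d{2,4}[-/]\d{1,2}[-/]\d{1,2})"
| rex "(?i)By[:\s]+(?<Author>[A-Za-z_.]+(?: [A-Za-z_.]+)*)"
| rex "(?i)IssueNo[:\s]+(?<IssueNo>\d+)"
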
Hey all, I got a really helpful response last time and now I'm back with another question. I have a search against a single sourcetype that I want to run multiple sets of clauses against, to return different results in one table for comparison. Example:

sourcetype=xyz
| where (color == "red" OR color == "blue" OR color == "purple" OR color == "green") AND (crayon == "crayola" OR crayon == "prisma" OR crayon == "offBrand" OR crayon == "brandA")
| (some stuff here that matches the search to a lookup)
| stats count(name) as "All sets" by Type
| where (color == "red" OR color == "blue") AND (crayon != "crayola" AND crayon != "offBrand")
| stats count(name) as "Set A" by Type
| where (color == "red" OR color == "blue") AND (crayon != "prisma" AND crayon != "brandA")
| stats count(name) as "Set B" by Type

(name and Type are the fields pulled from the lookup.) The end result I want is this:

All sets   Set A   Set B
5          3       2

I know it's bad practice to use join and append. I also know the where clauses are supposed to be higher up. I'm just not sure how to achieve this. I can get "All sets" just fine, of course, but after that nothing works. Any help for this newbie would be much appreciated. Thanks!

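One way to avoid join/append entirely is to compute all three counts in a single stats pass, using the count(eval(...)) idiom so each column applies its own condition. This is a sketch built on the field names above; the lookup step and the exact conditions will need adapting:

sourcetype=xyz
| where (color == "red" OR color == "blue" OR color == "purple" OR color == "green") AND (crayon == "crayola" OR crayon == "prisma" OR crayon == "offBrand" OR crayon == "brandA")
| stats count(name) as "All sets",
    count(eval(if((color == "red" OR color == "blue") AND crayon != "crayola" AND crayon != "offBrand", name, null()))) as "Set A",
    count(eval(if((color == "red" OR color == "blue") AND crayon != "prisma" AND crayon != "brandA", name, null()))) as "Set B"
    by Type
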
I am trying to use AWS Cognito to authenticate to a Splunk dashboard using SAML. There is a lot of information on configuring Cognito with other vendors, but not a lot of information on how to do this with Splunk. I have been trying to piece together settings from various documents I found during my research, but I don't know a lot about SAML.

I downloaded the Splunk metadata file and uploaded it in Cognito, but I get an error stating "We were unable to create identity provider: No IDPSSODescriptor found in metadata for protocol urn:oasis:names:tc:SAML:2.0:protocol and entity id splunkEntityId." I didn't see any IDPSSODescriptor in the uploaded file, which leads me to believe this may be incompatible.

My Splunk SAML setting is as follows:

[saml]
entityId = urn:amazon:cognito:sp:<my cognito pool id>
fqdn = testdashboardlb-79456348.us-east-1.elb.amazonaws.com  <-- This is my load balancer
idpSLOUrl = https://testdashboard.auth.us-east-1.amazoncognito.com/saml2/logout
idpSSOUrl = https://testdashboard.auth.us-east-1.amazoncognito.com/saml2/idpresponse
inboundDigestMethod = SHA1;SHA256;SHA384;SHA512
inboundSignatureAlgorithm = RSA-SHA1;RSA-SHA256;RSA-SHA384;RSA-SHA512
issuerId = urn:amazon:cognito:sp:<my cognito pool id>
lockRoleToFullDN = true
redirectAfterLogoutToUrl = testdash.xxxxxxxxx.com
redirectPort = 443
replicateCertificates = false
signAuthnRequest = false
signatureAlgorithm = RSA-SHA1
signedAssertion = true
sloBinding = HTTP-POST
ssoBinding = HTTP-POST

[authentication]
authSettings = saml
authType = SAML

I can authenticate and enter my MFA token. After that, I receive an error "Required String parameter 'SAMLResponse' is not present." Any help is appreciated.

Hello, in the Monitoring Console Summary dashboard, under Deployment Metrics, there is an "Avg. Search Latency" indicator. I searched the official documentation but didn't find an extensive explanation. What does this metric show? Thanks a lot, Edoardo

Hello Splunk ninjas, we all know about scheduled reports configured to use a schedule window: when they run delayed, they still gather data for the time range they would have covered if they had started on time. In short, a delayed report searches over the time range it was originally scheduled to cover.

But what happens when the search query uses the now() function, like many of the ESCU correlation searches do? Example: there is a query containing

| where firstTimeSeen > relative_time(now(), "-1h")

The report is scheduled every hour (cron = 0 * * * *) using a search time range of earliest=-70min, latest=now, with schedule window = auto. It's a busy day, so our query executes 40 minutes later than scheduled. As mentioned at the beginning, the time range used doesn't change; it's still :00 - :59 of the previous hour. However, now() has this definition: "This function takes no arguments and returns the time that the search was started." So the result set of the report is now different.

Is this behavior flawed by design? Many of the ES/ESCU correlation searches use this kind of now()-based filtering. How should it be solved: no schedule window? no auto? higher priority? durable search? real-time mode instead of continuous? Thanks for your educated answers.

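One workaround often suggested for exactly this situation is to anchor the comparison to the scheduled search window rather than the actual start time, using addinfo, which exposes the search time bounds as info_min_time and info_max_time. A sketch, assuming the same field name as above:

... your base search ...
| addinfo
| where firstTimeSeen > relative_time(info_max_time, "-1h")
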
Hello all, I'm using a lookup table with a _time field to create a timechart, which works great. However, the lookup table has data for, say, 90 days, and I don't always want the timechart to cover the full 90 days. How can I limit my timechart to 30 days from a lookup table that holds 90 days' worth of data, without deleting the extra 60 days? The _time field is already in the format %y-%m-%d %H:%M. I've tried:

| inputlookup mylookupfile where earliest=-30d

Thank you!

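Since inputlookup doesn't honor earliest/latest time modifiers, one approach is to parse the string timestamp and filter with where. A sketch, assuming _time is stored exactly in the %y-%m-%d %H:%M format described above:

| inputlookup mylookupfile
| eval _time=strptime(_time, "%y-%m-%d %H:%M")
| where _time >= relative_time(now(), "-30d@d")
| timechart span=1d count
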
Hi Experts, as part of a new initiative looking at SLO metrics, I have created the query below, which nicely counts the number of errors per day over a 30-day window and overlays the average on the same graph for easy viewing:

earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| stats count by _time instance
| timechart span=1d max(count) by instance
| appendcols
    [search earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
    | rex field=source "temp(?<instance>.*?)\/"
    | stats count by _time instance
    | stats avg(count) AS 30d_average]
| filldown 30d_average

I would like to work out the percentage of good results (anything lower than the average value) and the percentage of bad results (above the average), and show them in a stats table for each instance. Help needed! Thanks in advance, Theo

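A sketch of one way to get per-instance good/bad percentages without the appendcols subsearch: untable the timechart, attach the global average with eventstats (matching the single 30d_average overlay above; add "by instance" for per-instance averages), classify each day, and normalize the counts:

earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| timechart span=1d count by instance
| untable _time instance daily_count
| eventstats avg(daily_count) as avg_count
| eval quality=if(daily_count <= avg_count, "good", "bad")
| stats count by instance quality
| eventstats sum(count) as total by instance
| eval pct=round(100 * count / total, 1)
| xyseries instance quality pct
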
Can someone help me build a search query for the use case below?

My use case is to detect whether any S3 buckets have been opened to public access via a PutBucketPolicy event. So far, thanks to help from @ITWhisperer and @isoutamo on this community, I have got my search checking that the fields Effect and Principal have the values "Allow" and "*" (or {AWS: "*"}) respectively for the same Sid. Basically, the following two conditions must be met for a particular Sid:

Effect: Allow
Principal: * OR {AWS: *}

Next I want to filter further based on the field "Condition". How do I filter on whether "Condition" exists or not? Below is a snippet of raw event data:

"eventName": "PutBucketPolicy" "awsRegion": "us-east-1" "sourceIPAddress": "N.N.N.N" "userAgent": "[S3Console/0.4 aws-internal/3 aws-sdk-java/1.11.1002 Linux/5.4.129-72.229.amzn2int.x86_64]" "requestParameters": {"bucketPolicy": {"Version": "2012-10-17" "Statement": [{"Sid": "Access-to-specific-VPCE-only" "Effect": "Allow" "Principal": "*" "Action": "s3:*" "Resource": "arn:aws:s3:::abc-logs/*" "Condition": {"StringEquals": {"aws:sourceVpce": "XXX"}}}] "Id": "Policy14151152"} "bucketName": "Bucket-name" "Host": "host.xyz.com" "policy": ""}

"eventName": "PutBucketPolicy" "awsRegion": "us-east-1" "sourceIPAddress": "N.N.N.N" "userAgent": "[S3Console/0.4 aws-internal/3 aws-sdk-java/1.11.1002 Linux/5.4.116-64.217.amzn2int.x86_64 OpenJDK_64-Bit_Server_VM/Oracle_Corporation cfg/retry-mode/legacy]" "requestParameters": {"bucketPolicy": {"Version": "2012-10-17" "Statement": [{"Effect": "Allow" "Principal": "*" "Action": ["s3:List*" "s3:Get*"] "Resource": "arn:aws:s3::/*" "Condition": {"IpAddress": {"aws:SourceIp": ["N.N.N.N" "N.N.N.N"]}}}]} "bucketName": "bucket-name" "Host": "abc.xyz.com" "policy": ""}

I have tried the three options below to check for the presence of the Condition field, but none of them work: they all still show events where the raw data contains a Condition. I want the search to exclude events that define a Condition.

| spath requestParameters.bucketPolicy.Statement{} output=Statement
| mvexpand Statement
| spath input=Statement
| where Effect="Allow"
| where Principal="*" OR 'Principal.AWS'="*"
| where isnull(Condition)

and, in place of the last line:

| where Condition=""
| search Condition=""

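Since Condition is a nested JSON object, spath on the Statement doesn't necessarily leave a single Condition field to test; the keys may come out as Condition.StringEquals.aws:sourceVpce and the like. One sketch (field names follow the snippets above) is to extract the Condition subtree explicitly, which yields its raw JSON text when present and null when absent:

| spath requestParameters.bucketPolicy.Statement{} output=Statement
| mvexpand Statement
| spath input=Statement
| spath input=Statement path=Condition output=Condition
| where Effect="Allow" AND (Principal="*" OR 'Principal.AWS'="*") AND isnull(Condition)
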
Hi, what is the rex for this: field1=this is message

Here is the log:

00:09:59.990 app module: AB[0000]: Data[{"code":"OK","messageEn":"this is message","messageCa":null,"id":"0"}

Thanks,

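A sketch that captures everything between the quotes after messageEn (assuming the value never contains an escaped quote):

| rex "\"messageEn\":\"(?<field1>[^\"]*)\""
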
I'm trying to extract data from logs and display a count based on 2 fields. Below are sample log events:

14:48:23.668 INFO - Response(Uuid=1e850916-f99d-1e35a8d3c474, pojo=[Pojo(id=ID0047, flg=false), Pojo(id=ID0065, flg=false), Pojo(id=ID0105, flg=true), Pojo(id=ID0106, flg=true), Pojo(id=ID0066, flg=false), Pojo(id=ID0108, flg=false)])
14:48:23.676 INFO - Response(Uuid=c5ec43a2-8c07-c56f9f5bbd1f, pojo=[Pojo(id=ID0106, flg=false), Pojo(id=ID0107, flg=false), Pojo(id=ID0068, flg=true), Pojo(id=ID0105, flg=false), Pojo(id=ID0064, flg=true), Pojo(id=ID0108, flg=false), Pojo(id=ID0047, flg=false)])
14:48:23.690 INFO - Response(Uuid=eac5f53e-6407-eac356ca0458, pojo=[Pojo(id=ID0107, flg=false), Pojo(id=ID0047, flg=true), Pojo(id=ID0067, flg=false), Pojo(id=ID0106, flg=false), Pojo(id=ID0068, flg=false), Pojo(id=ID0108, flg=false)])

Below is the current query:

<base query>
| rex field=pojo max_match=0 "Pojo\((?<ID>.*?)\,(?<FLG>.*?)\)"
| chart count by ID FLG

If I use only one field in the count by (ID or FLG), the count is correct, but when I use both, the counts are wrong. The expected output looks like this:

ID       FLG=false   FLG=true
ID0047   2           1
ID0107   2           0
ID0065   1           0
...

Kindly help or suggest. Thanks!

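The usual cause here is that max_match=0 produces two parallel multivalue fields that chart can't pair up row by row; a common fix is to zip the values together and mvexpand the pairs. A sketch against the samples above, keeping your field=pojo:

<base query>
| rex field=pojo max_match=0 "Pojo\(id=(?<ID>[^,]+),\s*flg=(?<FLG>[^)]+)\)"
| eval pair=mvzip(ID, FLG)
| mvexpand pair
| eval ID=mvindex(split(pair, ","), 0), FLG=mvindex(split(pair, ","), 1)
| chart count over ID by FLG
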
Is it possible to place the pagination buttons on the top of a dashboard panel rather than have them appear at the bottom of a panel?

Hi, I am hoping to get some help in creating a search, which will be turned into an alert. I am working with system logs from a monitoring device: a log is submitted when any one of ~600 servers goes down, a new log is dropped every ~10 mins while the server stays down, and a "Reconnect" log is submitted if the server comes back up. I want the search to return the name of any server/agent that has had at least one "disconnect" but no "reconnect" entry within a time period; once a reconnect is received, the server should no longer be listed. I am not very experienced with Splunk, and so far I only have a search that returns counts of both types of events (connect/disconnect):

index="XXXlogs" sourcetype="systemlog" eventid="*connectserver" devicename="device1" logdescription="Agent*"
| stats count by win_server, event_id

Any help is appreciated.

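A sketch of one common pattern for this: take each server's most recent connect/disconnect event and keep only the servers whose latest event is a disconnect. The eventid values and field names here are guesses based on the wildcard and snippets above, so adjust them to the real data:

index="XXXlogs" sourcetype="systemlog" eventid="*connectserver" logdescription="Agent*"
| eval status=if(like(eventid, "%disconnect%"), "disconnected", "reconnected")
| stats latest(status) as last_status, latest(_time) as last_seen by win_server
| where last_status="disconnected"
| convert ctime(last_seen)
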
Hi, I just installed the Configuration Explorer in order to edit my transforms.conf. First I edited the settings file to set write-access = true, then I restarted Splunk. When I now try to edit transforms.conf, I can't save my changes; an error message appears saying "This file cannot be saved." Is there another way to edit the file, or how can I enable writing via the conf explorer?

Hi, what is the rex for these three fields?

Here is the log:

2021-10-14 12:51:20,412 INFO [APP] log in : A12345@#4321@california
2021-10-14 12:51:20,412 INFO [APP] log in : D12345@torrento
2021-10-14 12:51:20,412 INFO [APP] log in : B12345@#1234@newyork

field1 = A12345, D12345, B12345
field2 = 4321, 1234
field3 = california, torrento, newyork

Thanks

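A sketch that treats the "#number" segment as optional, checked against the three sample lines above:

| rex "log in : (?<field1>[A-Z]\d+)(?:@#(?<field2>\d+))?@(?<field3>\w+)"
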
Hello together, we moved our data to a new index cluster, and since then we are unable to delete events with the "| delete" query. We have a test system, a single-server instance, which will execute the same query; the datasets are identical on both systems. Here's a sample command we are trying to run on our clustered system:

index=name1 sourcetype=type1 earliest_time=-3d | delete

Since the documentation also noted that sometimes you should eval the index name to delete events, we also tried that:

index=name1 sourcetype=type1 earliest_time=-3d | eval index=name1 | delete

Both queries, without the delete command, return only a small set of 8 events. If we pipe the result to "delete", there is no error message or warning; however, the returned result table shows that zero events have been deleted.

Currently we have a new search head cluster and also our old single search head connected to this index cluster. The old single search head was previously also the single instance we migrated our data from to the new index cluster. Despite that migration, nothing has been changed in that server's user/role configuration. Still, delete is not working anymore on that search head either.

We followed all instructions in the Splunk documentation to ensure that it is not a configuration problem: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Delete

Additionally, we did the following to troubleshoot the delete process:

- We tried other datasets/indexes on our cluster -> same result (still working on the test server)
- We checked that our user has the "can_delete" role and created new local users with the "can_delete" role, both without success. We also noticed that if a user has no "can_delete" role assigned, the query result notifies that permissions are missing; since we don't get that message, we believe the role is set correctly
- We compared the authorize.conf from our test and cluster systems and didn't see any differences for those roles
- We checked all servers' splunkd logs after sending the delete command; no information or errors are available
- We checked that on the file system the bucket folders/files have the correct access permissions (rwx) for the "splunk" user
- We restarted the index cluster
- We tried the search query directly on the cluster master, on each search head cluster member, and on the old single search head of our clustered system
- We ran the Splunk health check with no issues
- We checked the bucket status for the index cluster
- We checked the Monitoring Console for indexer issues, with none found
- We ran | dbinspect for the index and checked that the listed file system paths are accessible by the splunk user
- We ran the search queries in the terminal via the Splunk CLI, with no errors or additional messages shown
- Both test and cluster servers are running the same version (8.1.6)
- The data from the query was indexed well after the migration

I would like to find all unused serverclasses on a deployment server.
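
A possible starting point, assuming the deployment server exposes its server classes over REST; the endpoint below should exist on recent versions, but verify it (and the fields it returns) on yours. Determining "unused" then means comparing this list against what the clients endpoint reports:

| rest /services/deployment/server/serverclasses splunk_server=local
| table title
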
Hello, I am having some issues writing a field extraction expression for the following events (3 sample events are given below). Each event has 14 comma-separated field values, and in most cases an event does not have all field values (i.e., there is nothing between two commas).

I was trying this expression:

^(?P<Field1>\w+),(?P<Field2>\w+),(?P<Field3>\w+),(?P<Field4>\w+),

But it gets stuck at Field4, which has no value (nothing between the two commas) in event 1. The same thing happens in the other events wherever a value is missing between two commas. How would I write my field extraction expression (regex) to extract 14 fields from each event, given that some fields may be empty? Any help will be highly appreciated. Thank you so much; I appreciate your support.

23SRFBB,HESR2,000000000,,TRY5gNbkVnedIIRbrk0A3wWOtE4L,12.218.76.129,2021-10-13 06:39:48 MDT,ISDMCISA,LOGOFF,USER,,,,
34SWFBB,RESG3,000000000,10AB,TFG3nNbkVnedIIDFbrk0A3wWOtE4L,,2021-10-13 06:39:48 MDT,ISDMCISA,LOGOFF,USER,,,,
45SRFBB,SES3X,000000000,,FDTt3nNbkVnedIIBSbrk0A3wWOtE4L,12.218.76.129,2021-10-13 06:39:48 MDT,ISDMCISA,LOGOFF,USER,,,1wqa,XY355

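The core fix is to allow empty fields by matching with [^,]* (zero or more non-comma characters) instead of \w+, which also handles values containing spaces or dots, such as the timestamp and IP fields. A sketch with all 14 fields written out:

| rex "^(?<Field1>[^,]*),(?<Field2>[^,]*),(?<Field3>[^,]*),(?<Field4>[^,]*),(?<Field5>[^,]*),(?<Field6>[^,]*),(?<Field7>[^,]*),(?<Field8>[^,]*),(?<Field9>[^,]*),(?<Field10>[^,]*),(?<Field11>[^,]*),(?<Field12>[^,]*),(?<Field13>[^,]*),(?<Field14>[^,]*)$"

For a permanent extraction, transforms.conf also supports delimiter-based extraction via the DELIMS and FIELDS settings, which avoids the long regex entirely.
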
Hi, how can I find events that contain non-English words? E.g., I have a log file where some lines contain German or Arabic words; how can I recognize these lines? Thanks

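A rough sketch: if "non-English" can be approximated as "contains non-ASCII characters", the regex command can filter on that (your_index is a placeholder; this will also match accented names and symbols, and German text written without umlauts would slip through, so it is only an approximation):

index=your_index
| regex _raw="[^\x00-\x7F]"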