All Posts


In a practical lab environment, how important is it to configure TLS on the Splunk servers? Is it mandatory to configure TLS in my environment?
How do I configure inputs.conf for TA_tshark (Network Input for Windows) | Splunkbase?
Hi @jonxilinx,

The aws:cloudwatch:guardduty source type was intended to be used with a CloudWatch Logs input after a transform from the aws:cloudwatchlogs source type. To use an SQS input, you can transform the data on your heavy forwarder. The configuration below works on the following event schema:

{
  "BodyJson": {
    "version": "0",
    "id": "cd2d702e-ab31-411b-9344-793ce56b1bc7",
    "detail-type": "GuardDuty Finding",
    "source": "aws.guardduty",
    "account": "111122223333",
    "time": "1970-01-01T00:00:00Z",
    "region": "us-east-1",
    "resources": [],
    "detail": { ... }
  }
}

You may need to adjust the configuration to match your specific input and event format.

# local/inputs.conf
[my_sqs_input]
aws_account = xxx
aws_region = xxx
sqs_queues = xxx
index = xxx
sourcetype = aws:sqs
interval = xxx

# local/props.conf
[aws:sqs]
TRANSFORMS-aws_sqs_guardduty = aws_sqs_guardduty_remove_bodyjson, aws_sqs_guardduty_to_cloudwatchlogs_sourcetype

# local/transforms.conf
[aws_sqs_guardduty_remove_bodyjson]
REGEX = "source"\s*\:\s*"aws\.guardduty"
INGEST_EVAL = _raw:=json_extract(_raw, "BodyJson")

[aws_sqs_guardduty_to_cloudwatchlogs_sourcetype]
REGEX = "source"\s*\:\s*"aws\.guardduty"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::aws:cloudwatchlogs:guardduty
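Once events are flowing, a quick sanity check on the search head is to confirm they land under the rewritten source type; a minimal sketch, where the index name is an assumption:

index=aws_security sourcetype=aws:cloudwatchlogs:guardduty
| spath output=finding_type path="detail-type"
| stats count by finding_type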
This is a little confusing. There is nothing to prevent multivalue fields from being used in lookup, so there is no need to mvexpand. All you need to do is

| lookup whitelistdomains url as emailDomains output url as match

The above assumes that whitelistdomains contains a field named url. To demonstrate, I'm using a lookup table from a previous question called all_urls. Its content is as follows:

url
www.url1.com
*.url2.com
site.url3.com

This is an emulation - I just changed the lookup name from the above:

| makeresults
| fields - _time
| eval emailDomains = mvappend("www.url1.com", "site.url3.com", "www.url3.com")
``` data emulation above ```
| lookup all_urls url as emailDomains output url as match

This gives

emailDomains       match
www.url1.com       www.url1.com
site.url3.com      site.url3.com
www.url3.com
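If you then want to keep only events where at least one domain matched the whitelist, you can test the output field directly; a minimal sketch built on the same lookup call:

| lookup all_urls url as emailDomains output url as match
| where isnotnull(match)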
Forget your extractions for now; the code snippet looks exactly like an attempt to use regex to extract from JSON. Could you clarify whether the full raw event is JSON? If it is, do not use regex. If JSON is just part of the event, the best option is to first extract the part that is JSON instead of directly extracting the information fragment.
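To illustrate the second case, you can grab the JSON substring with a single rex and hand it to spath; a minimal sketch assuming the JSON object follows a plain-text prefix (the field name json_payload is hypothetical):

| rex field=_raw "(?<json_payload>\{.*\})"
| spath input=json_payload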
Thank you for illustrating the use case clearly with sample data, logic, and expected results. But you also want to specify whether Json1 and json2 are in the same row/event. Here is a solution if they are.

| table Json1 json2
| transpose 0 column_name=name
| spath input="row 1"
| fields - "row 1"
| foreach *{} [eval <<MATCHSTR>>_array = mv_to_json_array('<<FIELD>>')]
| fillnull value=null
| fields - *{}
| stats list(*) as *
| foreach * [eval "<<FIELD>>" = if(mvcount(mvdedup('<<FIELD>>')) < 2, null(), '<<FIELD>>')]
| transpose 0 column_name=KeyName
| search "row 1" = *
| eval KeyName = if(KeyName LIKE "%_array", replace(KeyName, "_array$", "{}"), KeyName)
| eval "Old Value" = mvindex('row 1', 0), "New Value" = mvindex('row 1', 1)
| fields - "row 1"
| foreach *Value [eval <<FIELD>> = if('<<FIELD>>' != "null", '<<FIELD>>', if(KeyName LIKE "%{}", "[]", null()))]

Here is an emulation you can play with and compare with real data.

| makeresults
| fields - _time
| eval Json1 = "{ \"id\": \"XXXXX\", \"displayName\": \"ANY DISPLAY NAME\", \"createdDateTime\": \"2021-10-05T07:01:58.275401+00:00\", \"modifiedDateTime\": \"2025-02-05T10:30:40.0351794+00:00\", \"state\": \"enabled\", \"conditions\": { \"applications\": { \"includeApplications\": [ \"YYYYY\" ], \"excludeApplications\": [], \"includeUserActions\": [], \"includeAuthenticationContextClassReferences\": [], \"applicationFilter\": null }, \"users\": { \"includeUsers\": [], \"excludeUsers\": [], \"includeGroups\": [ \"USERGROUP1\", \"USERGROUP2\" ], \"excludeGroups\": [], \"includeRoles\": [], \"excludeRoles\": [] }, \"userRiskLevels\": [], \"signInRiskLevels\": [], \"clientAppTypes\": [ \"all\" ], \"servicePrincipalRiskLevels\": [] }, \"grantControls\": { \"operator\": \"OR\", \"builtInControls\": [ \"mfa\" ], \"customAuthenticationFactors\": [], \"termsOfUse\": [] }, \"sessionControls\": { \"cloudAppSecurity\": { \"cloudAppSecurityType\": \"monitor\", \"isEnabled\": true }, \"signInFrequency\": { \"value\": 1, \"type\": \"hours\", \"authenticationType\": \"primaryAndSecondaryAuthentication\", \"frequencyInterval\": \"timeBased\", \"isEnabled\": true } } }",
    json2 = "{ \"id\": \"XXXXX\", \"displayName\": \"ANY DISPLAY NAME 1\", \"createdDateTime\": \"2021-10-05T07:01:58.275401+00:00\", \"modifiedDateTime\": \"2025-02-06T10:30:40.0351794+00:00\", \"state\": \"enabled\", \"conditions\": { \"applications\": { \"includeApplications\": [ \"YYYYY\" ], \"excludeApplications\": [], \"includeUserActions\": [], \"includeAuthenticationContextClassReferences\": [], \"applicationFilter\": null }, \"users\": { \"includeUsers\": [], \"excludeUsers\": [], \"includeGroups\": [ \"USERGROUP1\", \"USERGROUP2\", \"USERGROUP3\" ], \"excludeGroups\": [ \"USERGROUP4\" ], \"includeRoles\": [], \"excludeRoles\": [] }, \"userRiskLevels\": [], \"signInRiskLevels\": [], \"clientAppTypes\": [ \"all\" ], \"servicePrincipalRiskLevels\": [] }, \"grantControls\": { \"operator\": \"OR\", \"builtInControls\": [ \"mfa\" ], \"customAuthenticationFactors\": [], \"termsOfUse\": [] }, \"sessionControls\": { \"cloudAppSecurity\": { \"cloudAppSecurityType\": \"block\", \"isEnabled\": true }, \"signInFrequency\": { \"value\": 2, \"type\": \"hours\", \"authenticationType\": \"primaryAndSecondaryAuthentication\", \"frequencyInterval\": \"timeBased\", \"isEnabled\": true } } }"
``` data emulation above ```

The above search gives

KeyName                                                  New Value                                    Old Value
conditions.users.excludeGroups{}                         ["USERGROUP4"]                               []
conditions.users.includeGroups{}                         ["USERGROUP1","USERGROUP2","USERGROUP3"]     ["USERGROUP1","USERGROUP2"]
displayName                                              ANY DISPLAY NAME 1                           ANY DISPLAY NAME
modifiedDateTime                                         2025-02-06T10:30:40.0351794+00:00            2025-02-05T10:30:40.0351794+00:00
name                                                     json2                                        Json1
sessionControls.cloudAppSecurity.cloudAppSecurityType    block                                        monitor
sessionControls.signInFrequency.value                    2                                            1

For the life of me, I cannot figure out where modifiedDateTime differs; the values look identical to me. We can go more semantic with SPL, but as you want the {} notation intact, this is perhaps the most direct.
I've created field extractions in splunkcloud.com, but they don't appear. Here are my extractions (Settings > Fields > Field extractions; App: Search & Reporting; config source: visible in app; Owner: sc_admin):

journal : EXTRACT-destip
Inline "dest_ip\":\"(?P<destip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\”"
Owner: sc_admin, App: search, Sharing: Global | Permissions: Enabled, object should appear in all apps; permissions: apps r/w, sc_admin r/w

journal : EXTRACT-srcip
Inline "src_ip\":\"(?P<srcip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\”"
Owner: sc_admin, App: search, Sharing: App | Permissions: Enabled, object should appear in this app only (search); permissions: sc_admin r/w

After adding data from a tar.gz file upload, Splunk Cloud (logged in as sc_admin) > Search > Interesting fields / All fields doesn't include those fields. What am I missing? Btw, if I extract new fields with the same names, it objects because they already exist.
request

{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"event":{"resource":"/v1/","path":"/data/v1/","httpMethod":"GET","queryStringParameters":{"identifier":"10"},"body":null,"requestContext":{"requestId":"","authorizer":{"principalId":"","integrationLatency":0},"domainName":""}},"msg":"init : data :invoke","time":"","v":0}

response

{"name":"","awsRequestId":"","hostname":"","pid":8,"level":30,"requestType":"GET","entity":"entity","client":"","domain":"","queryParams":{"identifier":"10"},"responseTime":291,"msg":"init: data :responseTime","time":"","v":0}
Hi @rfdickerson,

The Python source code for Splunk's implementation of StateSpaceForecast is collectively in:

$SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/bin/algos/StateSpaceForecast.py
$SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/bin/algos_support/statespace/*

The StateSpaceForecast algorithm is similar to the Splunk predict command. If you're not managing your own Splunk instance, you can download the MLTK archive from Splunkbase and inspect the files directly. The holdback and forecast_k parameters function as described. You may want to look at the partial_fit parameter for more control over the window of data used to update your model dynamically before using apply and (eventually) calculating TPR and FPR.
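As a rough illustration of holdback and forecast_k together (a minimal sketch; the lookup, field, and model names are assumptions based on the MLTK showcase data):

| inputlookup app_usage.csv
| fit StateSpaceForecast CRM holdback=24 forecast_k=24 into app:example_forecast

This withholds the last 24 data points from training and forecasts 24 points ahead, so the forecast can be compared against the held-back actuals.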
Hi @Rakzskull, Splunk support can assist with migrations from DDAA (Splunk-provided S3) to DDSS (customer-provided S3).
What you have shared are formatted events, not the raw unformatted data. Please share the unformatted _raw field from your events.
I included this:

| search PROJECTNAME="*" INVOCATIONID="*" RUNMAJORSTATUS="*" RUNMINORSTATUS="*"

as a placeholder for filtering using Simple XML inputs. The most likely cause of the difference in the number of results is one of the fields above not being present after spath extracts fields. In your second search, the events missing from the first search would have Status=="Unknown". Have you compared the results at the event level to look for differences other than simple truncation?
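To isolate the events that fall out of the first search, you can test for the missing fields directly; a minimal sketch using the same field names (base search omitted):

| spath
| where isnull(PROJECTNAME) OR isnull(INVOCATIONID) OR isnull(RUNMAJORSTATUS) OR isnull(RUNMINORSTATUS)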
Can you explain why you would want to install this on the heavy forwarder? I am trying to install it on my SH but can't get the configurations to connect. Would installing on the HF make a difference?
I didn't see any AppDynamics-specific roles on the UK job market in the last year, but I'm still interested in working with both Splunk and AppDynamics, as I have a lot of commercial experience with both.
Appreciate your reply @marnall. After updating the package to use the one built for AArch64, "_cffi_backend.cpython-39-aarch64-linux-gnu.so", the same error still appears.

One interesting observation: older versions of the package that we previously used still appear in the error logs after running "splunk-appinspect inspect". These older versions were deleted and replaced with "_cffi_backend.cpython-39-aarch64-linux-gnu.so", and yet after rebuilding and running "inspect", they still appear in the error logs.

FAILURE: Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible.
File: linux_x86_64/bin/lib/_cffi_backend.cpython-38-x86_64-linux-gnu.so
File: linux_x86_64/bin/lib/_cffi_backend.cpython-38-x86_64-linux-gnu.so
FAILURE: Found AArch64-incompatible binary file. Remove or rebuild the file to be AArch64-compatible.
File: linux_x86_64/bin/lib/_cffi_backend.cpython-39-x86_64-linux-gnu.so
File: linux_x86_64/bin/lib/_cffi_backend.cpython-39-x86_64-linux-gnu.so

Could this be an issue with Splunk AppInspect? What would be some possible explanations for why this is happening?
This is how I get the events:

1.
{
  event: {
    body: null
    httpMethod: GET
    path: /data/v1/name
    queryStringParameters: { identifier: 106 }
    requestContext: {
      authorizer: {
        integrationLatency: 0
        principalId: some@example.com
      }
      domainName: domain
    }
    resource: /v1/name
  }
  msg: data:invoke
}

2.
{
  client: same@example.com
  domain: domain
  entity: name
  msg: responseTime
  queryParams: { identifier: 666 }
  requestType: GET
  responseTime: 114
}
These are formatted versions of your events; please share the raw unformatted versions of your events (in a code block, just like you did with the formatted versions).
You could create a combined server/client cert and use it in both environments. Another excellent .conf presentation about TLS certs: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf. Also, this is a nice tool for managing certs: https://easy-rsa.readthedocs.io/en/latest/
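For reference, a combined PEM is typically the certificate, its private key, and the CA chain concatenated into one file; a minimal sketch of pointing a TLS-enabled TCP input at it (the path, port, and password are assumptions):

# inputs.conf on the receiving instance
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/combined.pem
sslPassword = <private key password>

[splunktcp-ssl:9997]
disabled = 0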
It's also worth mentioning that the client might need additional configuration to validate the commonName if the DNS name you are connecting with is not the same as the common name on the certificate.

@isoutamo The Lantern page (https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Securing_the_Splunk_platform_with_TLS) is very useful; I've got that bookmarked now, thanks.

The leaf cert that is being used for the web SSL should be sufficient for the TCP input cert, as it is pretty much serving the same purpose (a server cert). Interestingly, I have definitely been able to use a server cert as a client certificate in the past, although technically speaking I don't think that should be possible, as the server should be checking for "Client Authentication" (OID 1.3.6.1.5.5.7.3.2) attributes.

Anyway, @ptrsnk please keep us posted.

Will
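For the commonName validation mentioned above, the forwarder side can opt in explicitly; a minimal outputs.conf sketch (the server name, port, and cert path are assumptions):

[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslVerifyServerCert = true
sslCommonNameToCheck = idx1.example.com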
Hi,

I have raw event data in Splunk where the message contains "data invoke". Should this message be counted as a request made by a user, or should I write a query that counts an API request when the path matches a specific query string parameter? My goal is to display the total number of API requests made by any user on a dashboard, filtered by a selected date range.

1. Number of requests - is this the correct query, and how do I get the total count of requests for the selected date range?

index= source IN ("") "event"
| spath input=_raw output=queryStringParameters path=queryStringParameters
| table queryStringParameters
| stats count

Below is my Splunk log for a request:

{
  event: {
    body: null
    httpMethod: GET
    path: /data/v1/name
    queryStringParameters: { identifier: 106 }
    requestContext: {
      authorizer: { integrationLatency: 0, principalId: some@example.com }
      domainName: domain
    }
    resource: /v1/name
  }
  msg: data:invoke
}

2. Response time - how do I get the total for responseTime for the selected date range? Below is the log format; I am using this query:

index=* source IN ("*") *responseTime*
| fields responseTime
| table responseTime, total
| addcoltotals labelfield=total label="Total"
| search total!=""
| fields - total

{
  client: same@example.com
  domain: domain
  entity: name
  msg: responseTime
  queryParams: { identifier: 666 }
  requestType: GET
  responseTime: 114
}

Should I set the SLA based on the formula below, or do I also need to factor in response time?

passed SLA = ((total requests - total failed requests) / total requests) × 100
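If it helps, here is one way that SLA percentage could be computed in SPL; a minimal sketch where the failure condition (responseTime > 1000) and the msg filter are assumptions, and the dashboard time picker supplies the date range:

index=* source IN ("*") msg="*responseTime*"
| stats count as total_requests, count(eval(responseTime > 1000)) as failed_requests
| eval passed_sla_pct = round((total_requests - failed_requests) / total_requests * 100, 2)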