All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

When trying to deploy from https://github.com/aws-quickstart/quickstart-splunk-enterprise, I am unable to get past the SplunkCM EC2 instance deployment. The error is: "Failed to receive 1 resource signal(s) within the specified duration." I have tried to follow the steps here: https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-failed-signal/ The instance appears to be created successfully in EC2, but I am unable to ssh into it to check whether the cfn-signal scripts ran successfully, which seems to be the likely issue here. Any help would be much appreciated.
Hi all, First time posting here so please be patient; I am relatively new to the Splunk environment, but I am struggling to figure out this search. My manager has asked me to create an alert for load balancers flapping on our server. Criteria:
- Runs every 15 mins (I assume this can be set in the alert settings)
- Fires if a load balancer switches from Up to Down and back more than 5 times
This second point I am struggling to work out. This is what I have so far:

    index=xxx sourcetype="xxx" host="xxx" (State=UP OR State=DOWN) State="*"
    | stats count by State
    | eval state_status = if(DOWN+UP == 5, "Problem", "OK")
    | stats count by state_status

Note - "State" is the field in question; it stores the UP/DOWN values. Based on this, I can get an individual count of when the load balancer displayed UP and when it displayed DOWN, however I need to turn this into a threshold search that only counts when it changed from UP to DOWN more than 5 consecutive times. Any and all help will be much appreciated.
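One possible approach (an untested sketch, not the poster's solution): count actual UP-to-DOWN transitions per load balancer with streamstats instead of comparing totals. This assumes each event identifies the load balancer in the host field and that the alert's time range is the last 15 minutes; adjust the grouping field if the identifier lives elsewhere.

    index=xxx sourcetype="xxx" host="xxx" (State=UP OR State=DOWN)
    | sort 0 host _time ``` oldest first so streamstats sees the previous state ```
    | streamstats current=f last(State) as prev_state by host
    | eval flap=if(State="DOWN" AND prev_state="UP", 1, 0)
    | stats sum(flap) as up_down_transitions by host
    | where up_down_transitions > 5

The alert can then simply trigger when the number of results is greater than zero.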
The following query prints 'pp_user_action_name', 'Total_Calls', and 'Avg_User_Action_Response', but it does not return 'pp_user_action_user' values, as that field sits outside the userActions{} array. I am not able to combine values from the inner array and the outer array. How can I fix this?

    index="dynatrace" sourcetype="dynatrace:usersession"
    | spath output=pp_user_action_user path=userId
    | search pp_user_action_user="xxxx,xxxx"
    | spath output=user_actions path="userActions{}"
    | stats count by user_actions
    | spath output=pp_user_action_application input=user_actions path=application
    | where pp_user_action_application="xxxxx"
    | spath output=pp_user_action_name input=user_actions path=name
    | spath output=pp_user_action_targetUrl input=user_actions path=targetUrl
    | spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
    | eval pp_user_action_name=substr(pp_user_action_name,0,150)
    | stats count(pp_user_action_response) AS "Total_Calls", avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name
    | eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)
    | table pp_user_action_user, pp_user_action_name, Total_Calls, Avg_User_Action_Response
    | sort -Total_Calls

PFB sample event (reflowed from the event viewer; [+] marks collapsed sub-objects):

    [-]
      applicationType: WEB_APPLICATION
      bounce: false
      browserFamily: MicrosoftEdge
      browserMajorVersion: MicrosoftEdge108
      browserType: DesktopBrowser
      clientType: DesktopBrowser
      connectionType: UNKNOWN
      dateProperties: [ [+] ]
      displayResolution: FHD
      doubleProperties: [ [+] ]
      duration: 279730
      endReason: TIMEOUT
      endTime: 1676486021319
      errors: [ [+] ]
      events: [ [+] ]
      hasError: true
      hasSessionReplay: false
      internalUserId: xxxxx
      ip: xxxxx
      longProperties: [ [+] ]
      matchingConversionGoals: [ [+] ]
      matchingConversionGoalsCount: 0
      newUser: true
      numberOfRageClicks: 0
      numberOfRageTaps: 0
      osFamily: Windows
      osVersion: Windows10
      partNumber: 0
      screenHeight: 1080
      screenOrientation: LANDSCAPE
      screenWidth: 1920
      startTime: 1676485741589
      stringProperties: [ [+] ]
      syntheticEvents: [ [+] ]
      tenantId: xxxx
      totalErrorCount: 3
      totalLicenseCreditCount: 1
      userActionCount: 12
      userActions: [ [-]
        { [-]
          apdexCategory: FRUSTRATED
          application: xxxx
          cdnBusyTime: null
          cdnResources: 0
          cumulativeLayoutShift: null
          customErrorCount: 0
          dateProperties: [ [+] ]
          documentInteractiveTime: null
          domCompleteTime: null
          domContentLoadedTime: null
          domain: xxxxx
          doubleProperties: [ [+] ]
          duration: 16292
          endTime: 1676485757881
          firstInputDelay: null
          firstPartyBusyTime: 15012
          firstPartyResources: 2
          frontendTime: 1289
          internalApplicationId: xxxxx
          javascriptErrorCount: 0
          keyUserAction: false
          largestContentfulPaint: null
          loadEventEnd: null
          loadEventStart: null
          longProperties: [ [+] ]
          matchingConversionGoals: [ [+] ]
          name: clickontasknamexxxxx
          navigationStart: 1676485742474
          networkTime: 1881
          requestErrorCount: 0
          requestStart: 1175
          responseEnd: 15003
          responseStart: 14297
          serverTime: 13122
          speedIndex: 16292
          startTime: 1676485741589
          stringProperties: [ [+] ]
          targetUrl: xxxx
          thirdPartyBusyTime: null
          thirdPartyResources: 0
          totalBlockingTime: null
          type: Xhr
          userActionPropertyCount: 0
          visuallyCompleteTime: 16292
        }
        { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] }
      ]
      userExperienceScore: TOLERATED
      userId: xxxxx,xxxx
      userSessionId: xxxxx
      userType: REAL_USER
    }
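One way to keep the outer-array field (an untested sketch based on the query above, not a confirmed fix): expand userActions{} into one row per action with mvexpand, then carry pp_user_action_user through the final stats by clause instead of dropping it with the intermediate stats count by user_actions.

    index="dynatrace" sourcetype="dynatrace:usersession"
    | spath output=pp_user_action_user path=userId
    | search pp_user_action_user="xxxx,xxxx"
    | spath output=user_actions path="userActions{}"
    | mvexpand user_actions ``` one row per user action; outer fields are copied onto each row ```
    | spath input=user_actions output=pp_user_action_application path=application
    | where pp_user_action_application="xxxxx"
    | spath input=user_actions output=pp_user_action_name path=name
    | spath input=user_actions output=pp_user_action_response path=visuallyCompleteTime
    | eval pp_user_action_name=substr(pp_user_action_name,1,150)
    | stats count(pp_user_action_response) AS Total_Calls, avg(pp_user_action_response) AS Avg_User_Action_Response by pp_user_action_user, pp_user_action_name
    | eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)
    | sort -Total_Calls

Two caveats: mvexpand can hit memory limits on very large sessions, and substr indexes from 1 in SPL, hence the change from 0 to 1.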
Anybody here running Splunk Enterprise on an IBM E950 or similar IBM POWER8- or POWER9-based server with a Linux kernel?
I have a lookup with multiple columns (keys). Some combinations make a unique match, but I need an ambiguous search on a single key to return all matched items of a particular field. In a simplified form, the lookup is like this:

    QID    IP         Detected
    12345  127.0.0.1  2022-12-10
    45678  127.0.0.1  2023-01-21
    12345  127.0.0.2  2023-01-01
    45678  127.0.0.2  2022-12-15
    23456  ...        ...

QID and IP determine a unique Detected value; you could say the combination is a primary key. There is no problem searching by primary key. My requirement is to search by QID alone. For 12345, for example, I expect the return to be multivalued (2022-12-10, 2023-01-01). If I hard-code QID in an emulation, that's exactly what I get:

    | makeresults
    | eval QID=12345
    | lookup mylookup QID
    | table QID Detected

This gives me:

    QID    Detected
    12345  2022-12-10
           2023-01-01

But if I use the same lookup in a search, e.g.,

    index=myindex QID=12345
    | stats count by QID ``` result is the same whether or not stats precedes lookup ```
    | lookup mylookup QID
    | table QID Detected

the result is blank:

    QID    Detected
    12345

The behavior can be more complex if the search returns more than one QID (e.g., QID IN (12345, 45678)). Sometimes one of them will get Detected populated, but not the others. How can I make sure multiple matches are all returned?
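One thing worth ruling out (a hedged guess, not a confirmed cause): if the QID coming out of the events is numeric or carries stray whitespace, it may not match the lookup's string key even though the hard-coded makeresults value does. A sketch that normalizes the key before the lookup and requests the output field explicitly:

    index=myindex QID=12345
    | stats count by QID
    | eval QID=trim(tostring(QID))
    | lookup mylookup QID OUTPUT Detected
    | table QID Detected

If that changes nothing, the lookup definition itself is worth checking: for a lookup defined in transforms.conf, a max_matches setting of 1 would also collapse the result to a single value.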
Hi All, I need to re-import new XML metadata into the Splunk Cloud SAML configuration, generated for our Azure SSO users. The current cert is valid until 19/02/2023. The issue is that when I try to import the new XML (federationmetadata.xml) into the SAML configuration in Splunk, it constantly encounters the error "There are multiple cert,idepCertPath,idpCert.pem, must be directory". I tried to remove the idpCert.pem at ./etc/auth/idpCerts/idpCert.pem, but that shows a Server Error. I don't know how I can reach that path (./etc/auth/idpCerts/idpCert.pem) in Splunk Cloud, as it is not on-premises. I really need your help, as the current cert will expire very soon (19/02/2023), which will result in users and admins being locked out of Splunk Cloud. Is there any way to fix this? It is urgent. Many thanks, Goli @tlam_splunk @gcusello I would greatly appreciate it if anyone could help me!
Unfortunately I have no control over the log data formatting. It is in the format:

    Field1=Value1|Field2=Value2| ... |Criteria=one,two,three,99.0|...

I have one field, Criteria, that has many values with embedded commas. Splunk search only gives me the first value; I want all the values treated as one in a stats count by. I tried the search below to rewrite the commas, and I do see the changes, but stats is still getting only the first value.

    index=myidx Msg=mymsg
    | rex mode=sed field=_raw "s/,/-/g"
    | bucket span=1d _time as ts
    | eval ts=strftime(ts,"%Y-%m-%d")
    | stats count by ts Criteria
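A hedged sketch of an alternative (not verified against this data): the automatic key=value extraction of Criteria most likely happens from the original event, so rewriting _raw mid-search does not change the already-extracted field. Re-extracting the full pipe-delimited value into a new field sidesteps that; Criteria_all is a made-up field name for illustration.

    index=myidx Msg=mymsg
    | rex field=_raw "Criteria=(?<Criteria_all>[^|]+)"
    | bucket span=1d _time as ts
    | eval ts=strftime(ts,"%Y-%m-%d")
    | stats count by ts Criteria_all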
Hi, I am using a regex to search for a field "statusCode" which could have multiple values, i.e. "200", "400", "500", etc. I am attempting to create an Interesting Field "statusCode" and have it sorted by the different statusCode values. I am trying to perform a search using the following:

    \\Sample Query
    index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolAppName ("\"statusCode\"")
    | rex field=_raw \"statusCode\"\s:\s\"?(?<statusCode>2\d{2}|4\d{2}|5\d{2})\"?

    \\Sample Log (looks like a JSON object, but it's a string):
    "{
      "correlationId" : "",
      "message" : "",
      "tracePoint" : "",
      "priority" : "",
      "category" : "",
      "elapsed" : 0,
      "locationInfo" : {
        "lineInFile" : "",
        "component" : "",
        "fileName" : "",
        "rootContainer" : ""
      },
      "timestamp" : "",
      "content" : {
        "message" : "",
        "originalError" : {
          "statusCode" : "200",
          "errorPayload" : {
            "error" : ""
          }
        },
        "standardizedError" : {
          "statusCode" : "400",
          "errorPayload" : {
            "errors" : [ {
              "error" : {
                "traceId" : "",
                "errorCode" : "",
                "errorDescription" : "",
                "errorDetails" : ""
              }
            } ]
          }
        },
        "standardizedError" : {
          "statusCode" : "500",
          "errorPayload" : {
            "errors" : [ {
              "error" : {
                "traceId" : "",
                "errorCode" : "",
                "errorDescription" : "",
                "errorDetails" : ""
              }
            } ]
          }
        }
      }
    }"

Using online regex tools and a sample output of a log, I have confirmed the regex works outside of a Splunk query. I have also gone through numerous Splunk community threads and tried different permutations based on suggestions, with no luck. Any help would be appreciated.
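A possible adjustment (an untested sketch): in SPL the rex pattern has to be a single quoted string, so the expression as written above is not parsed as one argument. Quoting the whole pattern and escaping the inner quotes usually gets past that, and max_match=0 makes statusCode multivalued, since one event can carry several status codes.

    index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolAppName "statusCode"
    | rex field=_raw max_match=0 "\"statusCode\"\s*:\s*\"?(?<statusCode>[245]\d{2})\"?"
    | stats count by statusCode

Here [245]\d{2} is just a shorter way of writing the 2xx|4xx|5xx alternation.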
Hello community, I'm trying to configure the props file so that the following event starts from the third line: [sample event not included in this post]. Currently, I am testing as follows: [current props settings not included in this post]. If I leave this setting, Splunk assigns its own timestamp to the first few lines, but it should take the timestamp from the lines that contain a date. Regards
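Without the sample event it is hard to be specific, but the settings that usually control this are LINE_BREAKER/SHOULD_LINEMERGE for where events start and TIME_PREFIX/TIME_FORMAT for which line supplies the timestamp. A generic sketch only; the stanza name, the date pattern, and the time format below are placeholders that would need to match the real data:

    [your_sourcetype]
    SHOULD_LINEMERGE = false
    # break a new event just before a line that starts with a date
    LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 25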
Is there a way in Splunk to determine how a user arrived at a destination IP? Did they click a link from a certain webpage, or did they go there directly? Another way to look at it: is there a way to separate user activity from webpage activity? Websites load advertisements and other content automatically within a second, or a very small time interval. Users, on the other hand, are scrolling, clicking on a link, then clicking on another link, which takes a significantly longer amount of time. Being able to consolidate web page activity, where dozens of destination addresses are accessed within 5 seconds, into a single event where just the first record is shown would help reduce the number of results returned when looking at a time window containing several thousand records.
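A rough sketch of one way to group those bursts (the index, sourcetype, and field names are assumptions about the proxy data, so treat this as illustrative only): the transaction command can merge all requests from the same user that occur within 5 seconds of each other into a single result, which helps separate user-driven clicks from the flurry of automatic page-load requests.

    index=proxy sourcetype=web_proxy
    | transaction user maxpause=5s
    | eval dest_count=mvcount(dest_ip)
    | table _time user duration dest_count dest_ip

Each row then represents one burst of activity; duration and dest_count give a feel for whether it was a single deliberate click or an automatic page load.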
It's making me crazy! Splunk Enterprise 8.2.6, clustered search head with 3 members.

    [role_test]
    cumulativeRTSrchJobsQuota = 0
    cumulativeSrchJobsQuota = 0
    grantableRoles = test
    importRoles = user
    srchIndexesAllowed = *;_*
    srchMaxTime = 8640000

A new "test" role, importing capabilities from the "user" role, and a new user assigned to the "test" role. There is no way to query the _internal index. Any suggestions? Thanks.
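A diagnostic sketch rather than an answer (run as an admin on each search head member; the field names are as I recall them from the authorization/roles REST endpoint, so worth double-checking): it shows what the running instance actually thinks the role allows, which helps tell a conf-distribution problem on the SHC apart from a role-definition problem.

    | rest /services/authorization/roles splunk_server=local
    | search title=test
    | table title srchIndexesAllowed srchIndexesDefault imported_srchIndexesAllowed imported_srchIndexesDefault

If srchIndexesAllowed really contains _* on every member, the next thing to check (again, a guess) is whether the test user's searches explicitly specify index=_internal, since internal indexes are never searched by default.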
Set up an app folder "TA-Whatever" on my search head (clustered with indexers and HECs) from the app builder. Dropped a py script in the default folder inside TA-Whatever that generates a JSON file, which gets dropped in the parent TA-Whatever app folder. When going to the Add Data, Files & Directories menu, I went through the server file system drop-downs and selected the JSON file, and on the second page where you select the sourcetype it sees the JSON data. After I select my index, sourcetype and all that, finish, restart Splunk, and search the index, I see nothing; no data is there. I have rebuilt the app several times as the splunk user and as root, chmod'd everything to 777, rebuilt sourcetypes and indexes, tried with and without a props.conf entry, added inputs.conf to the local app folder, tried with no inputs.conf in the TA-Whatever local folder, and also added a monitor to the global inputs.conf in etc/system/local, and still no dice. Here is the inputs.conf I tried in both the global and the local app directory:

    [monitor://tmp/felt.json]
    disabled = 0
    index = googlepyscript
    sourcetype = googlepy

I also tried:

    [monitor:///tmp/felt.json]
    disabled = 0
    index = googlepyscript
    sourcetype = googlepy

No matter what I do, the index is not getting data. I have tried it with the built-in _json sourcetype and created my own, and no data goes to the index after I finish the wizard. Any input is welcome at this point, as I have been going at it for several days. Thanks!
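One diagnostic that may save some rebuilding (a suggestion, not a confirmed fix): splunkd logs every file the tailing processor touches, so searching the internal index for the file name shows whether the monitor stanza was ever picked up at all, and whether the file was seen but skipped.

    index=_internal sourcetype=splunkd (component=TailingProcessor OR component=TailReader OR component=WatchedFile) felt.json

If nothing at all comes back for the file, the stanza is probably not being read on the instance where the file lives; if there are checksum/CRC messages, the file is being seen but treated as already indexed.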
I'm trying to create a search that shows a daily message count (both inbound and outbound) and the average for each direction. Although it doesn't give me any errors, when the table gets created the results show as zero (I know this is inaccurate, as I pulled a message trace from O365 to confirm).

    index=vs_email sampledesign@victoriasecret.com
    | eval direction=case(RecipientAddress="sampledesign@victoriasecret.com", "inbound", RecipientAddress!="sampledesign@victoriasecret.com", "outbound")
    | dedup MessageId
    | bin _time span=1d
    | eventstats count(direction="inbound") as inbound_count
    | eventstats count(direction="outbound") as outbound_count
    | dedup _time
    | eventstats avg(inbound_count) as average_inbound_count
    | eventstats avg(outbound_count) as average_outbound_count
    | table inbound_count outbound_count average_inbound_count average_outbound_count

All of the results are showing as zero. Any help would be much appreciated. Thanks!
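A likely culprit and a sketch of a fix (worth verifying against the data): count(direction="inbound") is read by stats/eventstats as a count of a field literally named direction="inbound", which never exists, so the result is zero. Wrapping the condition in eval() counts the rows where the expression is true, and doing the daily counts with stats by _time keeps one row per day without the dedup _time step.

    index=vs_email sampledesign@victoriasecret.com
    | eval direction=case(RecipientAddress="sampledesign@victoriasecret.com", "inbound", true(), "outbound")
    | dedup MessageId
    | bin _time span=1d
    | stats count(eval(direction="inbound")) as inbound_count, count(eval(direction="outbound")) as outbound_count by _time
    | eventstats avg(inbound_count) as average_inbound_count, avg(outbound_count) as average_outbound_count
    | table _time inbound_count outbound_count average_inbound_count average_outbound_count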
Hi, please, can someone let me know which file and variable in the "Splunk Add-on for AWS" S3 input limits the ingestion of files to 1 hour? I didn't find any variable in the inputs.conf file that limits the ingestion of files to 1 hour. We need to index older files from the S3 bucket, but the "Splunk Add-on for AWS" only indexes the last hour. This is the inputs.conf file:

    [aws_s3://cloud-logs]
    aws_account = abc
    aws_s3_region = us-east-1
    bucket_name = f-logs
    character_set = auto
    ct_blacklist = ^$
    host_name = s3.us-east-1.amazonaws.com
    index = cloud
    initial_scan_datetime = 2022-01-14T15:59:18Z
    max_items = 100000
    max_retries = 3
    polling_interval = 300
    private_endpoint_enabled = 0
    recursion_depth = -1
    sourcetype = cloud:json
    disabled = 0

Regards, Edgard Patino
I am using scheduled alerts, and I notice that not all alerts are getting fired and I am not receiving emails for all of the events. I get emails for around 45-50% of the alerts; for the rest I get no email. Any reason for this weird behaviour?
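A first diagnostic step (a suggestion; replace the saved-search name with the real one): the scheduler logs in _internal show whether the missing runs were skipped or deferred, for example because the scheduler hit its concurrency limits, which is a common reason alerts silently fail to fire.

    index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
    | fillnull value="-" reason
    | stats count by status reason

If the missing runs show status=success there, the search itself fired and the problem is more likely on the email side, which is also logged under index=_internal.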
Good afternoon, I wanted to reach out to the community for some assistance/clarification on the best approach to move a search head with 2 index peers to a new Windows server. We currently have one search head, acting as the license master, with two index peers that it is connected to. The search head server is being decommissioned, so I will need to swap the search head and license master over to the new server. Is there a recommended best approach to this? I found the following information:

How to migrate
When you migrate on *nix systems, you can extract the tar file you downloaded directly over the copied files on the new system, or use your package manager to upgrade using the downloaded package. On Windows systems, the installer updates the Splunk files automatically.
1. Stop Splunk Enterprise services on the host from which you want to migrate.
2. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
3. Install Splunk Enterprise on the new host.
4. Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host.
5. Start Splunk Enterprise on the new instance.
6. Log into Splunk Enterprise with your existing credentials.
7. After you log in, confirm that your data is intact by searching it.

here: https://docs.splunk.com/Documentation/Splunk/7.2.3/Installation/MigrateaSplunkinstance

But this seems a little too simple. I am unable to keep the same server name and IP address that the search head has now, so while this may indeed be simple, I just wanted to ensure I am not missing a critical step. Thank you, Dan
I have a saved report with three output fields that I want to add to a column chart in Dashboard Studio. Two of the three fields contain static values (license limit and optimal utilization threshold) that I want to add as overlays to the third field, utilized SVC. I can't seem to get the JSON correct. This is as close as I have come. How can I add two fields as overlays in a column chart? Image attached.

    {
      "type": "splunk.column",
      "dataSources": {
        "primary": "ds_search_1"
      },
      "title": "SVC License Usage (today)",
      "options": {
        "yAxisAbbreviation": "off",
        "y2AxisAbbreviation": "off",
        "showRoundedY2AxisLabels": false,
        "legendTruncation": "ellipsisMiddle",
        "showY2MajorGridLines": true,
        "xAxisTitleVisibility": "hide",
        "yAxisTitleText": "SVC Usage",
        "overlayFields": ["optimal utilization threshold", "license limit"],
        "columnGrouping": "overlay"
      },
      "context": {},
      "showProgressBar": false,
      "showLastUpdated": false
    }
I received an error stating "This saved search cannot perform summary indexing because it has a malformed search." while I was setting up summary indexing through the UI.  The SPL in my saved search included a lookup and a subsearch to dynamically set the earliest and latest values for the main search. From what I found researching the error, the issue is related to passing the earliest and latest values back to the main search. It took me a while to solve this so I thought I'd post it here to help anyone else seeing this error.  
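For anyone hitting the same thing, a sketch of one pattern that sometimes avoids the error (hedged; the lookup name and field names here are placeholders, not the original poster's actual search): instead of passing earliest/latest into the saved search's dispatch settings, let a subsearch return them as terms inside the base search so the time bounds travel with the SPL itself.

    index=myindex sourcetype=mysourcetype
        [ | inputlookup time_window.csv
          | return earliest latest ]
    | stats count by host

The subsearch's return command emits earliest="..." latest="..." into the outer search, which summary indexing can then schedule without the malformed-search complaint.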
Hello, I have created a modular input using the splunk-app-example as a starting point. It extends the Script class, and I modified the get_scheme function to add arguments (example):

    key_secret = Argument("key_secret")
    key_secret.title = "Key Secret"
    key_secret.data_type = Argument.data_type_string
    key_secret.required_on_create = True

This saves key_secret as a plain string, which is clearly insecure. Investigating, I found the storage_passwords endpoint, and I added the following to the stream_events method:

    secrets = self.service.storage_passwords
    if key_secret != "":
        storage_passwords = self.service.storage_passwords
        storage_password = storage_passwords.create(key_secret, key_id, tenant)
        input_item.update({"key_secret": ""})
    else:
        key_secret = next(secret for secret in secrets
                          if (secret.realm == tenant and secret.username == key_id)).clear_password

This is not working: I cannot modify the input definition, so it is storing the secret both in storage_passwords and in inputs.conf. Is there any way in code to delete the inputs.conf password, or what is the correct way to manage this? Thanks!
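A sketch of the masking pattern commonly used with the splunklib modular input framework (hedged: the argument names and the MASK placeholder are assumptions, and updating the plain input_item dict in stream_events does not persist anything, which is probably why the clear-text value stays in inputs.conf): after storing the secret, post a masked value back to the input's own REST endpoint so inputs.conf is rewritten.

    MASK = "<encrypted>"

    def store_and_mask_secret(self, input_name, input_item):
        """Store key_secret in storage/passwords, then overwrite it in inputs.conf."""
        key_id = input_item["key_id"]
        tenant = input_item["tenant"]
        key_secret = input_item["key_secret"]
        storage_passwords = self.service.storage_passwords

        if key_secret and key_secret != MASK:
            # Replace any stale credential for this realm/username pair.
            for secret in storage_passwords:
                if secret.realm == tenant and secret.username == key_id:
                    storage_passwords.delete(username=key_id, realm=tenant)
                    break
            storage_passwords.create(key_secret, key_id, tenant)

            # Rewrite the clear-text value that was written into inputs.conf.
            kind, name = input_name.split("://", 1)
            self.service.inputs[name, kind].update(key_secret=MASK)
            return key_secret

        # Later runs only see the mask, so read the real secret back.
        return next(s.clear_password for s in storage_passwords
                    if s.realm == tenant and s.username == key_id)

This would be called from stream_events as key_secret = self.store_and_mask_secret(input_name, input_item) for each configured input.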
Hi Splunkers, I have developed a custom add-on on one server, where it works fine. When I exported it to another server, it gives the error below. I have gone through many articles on the community and found it is somehow related to the passwords.conf file: since the add-on was built on a different server, the password was encrypted with that server's splunk.secret and KV store. Please help me understand how I can resolve this on the other server and update passwords.conf, or bind it to that server.

    ERROR PersistentScript [23354 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk-add-on-for-nimsoft/bin/splunk_add_on_for_nimsoft_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):

Thanks in advance,