All Topics

Hello, I am trying to import a JSON file into Splunk. The file seems to be imported into a single event, but not all of it: only about 10% (or less) of the file is indexed. Could it be because of a configuration setting I have to change? The file is of this format     {"resultsPerPage":344,"startIndex":0,"totalResults":344,"format":"NVD_CVE","version":"2.0","timestamp":"2023-02-15T09:42:40.560","vulnerabilities":[{"cve":{"id":"CVE-2013-10012","sourceIdentifier":"cna@vuldb.com","published":"2023-01-16T11:15:10.037","lastModified":"2023-01-24T15:14:10.117","vulnStatus":"Analyzed","descriptions":[{"lang":"en","value":"A vulnerability, which was classified as critical, was found in antonbolling clan7ups. Affected is an unknown function of the component Login\/Session. The manipulation leads to sql injection. The name of the patch is 25afad571c488291033958d845830ba0a1710764. It is recommended to apply a patch to fix this issue. The identifier of this vulnerability is VDB-218388."}],"metrics":{"cvssMetricV31":[{"source":"nvd@nist.gov","type":"Primary","cvssData":{"version":"3.1","vectorString":"CVSS:3.1\/AV:N\/AC:L\/PR:N\/UI:N\/S:U\/C:H\/I:H\/A:H","attackVector":"NETWORK","attackComplexity":"LOW","privilegesRequired":"NONE","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"HIGH","integrityImpact":"HIGH","availabilityImpact":"HIGH","baseScore":9.8,"baseSeverity":"CRITICAL"},"exploitabilityScore":3.9,"impactScore":5.9}],"cvssMetricV30":[{"source":"cna@vuldb.com","type":"Secondary","cvssData":{"version":"3.0","vectorString":"CVSS:3.0\/AV:A\/AC:L\/PR:L\/UI:N\/S:U\/C:L\/I:L\/A:L","attackVector":"ADJACENT_NETWORK","attackComplexity":"LOW","privilegesRequired":"LOW","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"LOW","integrityImpact":"LOW","availabilityImpact":"LOW","baseScore":5.5,"baseSeverity":"MEDIUM"},"exploitabilityScore":2.1,"impactScore":3.4}],"cvssMetricV2":[{"source":"cna@vuldb.com","type":"Secondary","cvssData":{"version"
:"2.0","vectorString":"AV:A\/AC:L\/Au:S\/C:P\/I:P\/A:P","accessVector":"ADJACENT_NETWORK","accessComplexity":"LOW","authentication":"SINGLE","confidentialityImpact":"PARTIAL","integrityImpact":"PARTIAL","availabilityImpact":"PARTIAL","baseScore":5.2},"baseSeverity":"MEDIUM","exploitabilityScore":5.1,"impactScore":6.4,"acInsufInfo":false,"obtainAllPrivilege":false,"obtainUserPrivilege":false,"obtainOtherPrivilege":false,"userInteractionRequired":false}]},"weaknesses":[{"source":"cna@vuldb.com","type":"Primary","description":[{"lang":"en","value":"CWE-89"}]}],"configurations":[{"nodes":[{"operator":"OR","negate":false,"cpeMatch":[{"vulnerable":true,"criteria":"cpe:2.3:a:clan7ups_project:clan7ups:*:*:*:*:*:*:*:*","versionEndExcluding":"2013-02-12","matchCriteriaId":"12D82AEE-3A68-4121-811C-C3462BCEAF25"}]}]}],"references":[{"url":"https:\/\/github.com\/antonbolling\/clan7ups\/commit\/25afad571c488291033958d845830ba0a1710764","source":"cna@vuldb.com","tags":["Patch","Third Party Advisory"]}       I would appreciate any help  Thank you
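When only part of a large single-object JSON file is indexed, a common cause is the default `TRUNCATE` limit in props.conf (10,000 bytes), combined with line-breaking rules that split the object. A possible props.conf sketch — the sourcetype name `nvd_json` is an assumption, and the settings should be adapted to how you want the file split:

```ini
[nvd_json]
# Do not cut events off at the default 10000-byte limit
TRUNCATE = 0
# Keep the whole file as one event rather than merging lines heuristically
SHOULD_LINEMERGE = false
LINE_BREAKER = ((?!))
# Parse the JSON fields at search time
KV_MODE = json
# Timestamp from the top-level "timestamp" field
TIME_PREFIX = \"timestamp\"\s*:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 200
```

Alternatively, if each CVE should become its own event, a `LINE_BREAKER` that splits on the elements of the `vulnerabilities` array (e.g. on `},{"cve"`) may be more useful than one giant event; treat either variant as a starting point to test, not a definitive configuration.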
So I am trying to get a list of inactive Splunk users.  I first tried just grabbing a list of all the users with a last login older than 6 months, but that gives me a list that includes users that have already been deleted in Splunk, like this:   index=_audit action="login attempt" | where strptime('timestamp',"%m-%d-%Y %H:%M:%S")<relative_time(now(),"-6mon") | stats latest(timestamp) by user     Then I tried joining it with a list of the current users from the REST API like this:   | rest /services/authentication/users splunk_server=local | fields realname, title | rename title as user | join user type=left [ search index=_audit action="login attempt" | where strptime('timestamp',"%m-%d-%Y %H:%M:%S")<relative_time(now(),"-6mon") | stats latest(timestamp) by user ]   This doesn't work and just outputs a list of current users. What I want: a list of current Splunk users whose last login attempt is older than 6 months, with realname, username, and last login time. I have tried this solution from javiergn, but I cannot get the last login time with it: https://community.splunk.com/t5/Splunk-Search/How-do-I-edit-my-search-to-identify-inactive-users-over-the-last/m-p/285256
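One thing worth noting about the searches above: filtering the audit events to those older than 6 months *before* `stats latest(...)` returns a user's last login within that old window even if they logged in yesterday. A sketch of the reversed logic — compute each user's most recent login over all time first, then filter (the `earliest=0` and the exact `action` value are assumptions to verify against your _audit events):

```
| rest /services/authentication/users splunk_server=local
| fields title realname
| rename title AS user
| join type=left user
    [ search index=_audit action="login attempt" earliest=0
      | stats latest(_time) AS last_login BY user ]
| where isnull(last_login) OR last_login < relative_time(now(), "-6mon")
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")
| table user realname last_login
```

Users with no login event in the audit retention window come back with a null `last_login`, which the `where` clause also treats as inactive; drop the `isnull(...)` term if that is not desired.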
In the Admin classes, configuration precedence was defined for index time and search time.  However, since the Splunk UF neither indexes nor searches, what precedence order does the Splunk UF follow?
source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN (TCODE="ZORF_BOX_CLOSING") SYUCOMM="SICH_T" ACCOUNT=HRL* | eval RESPTI = round(RESPTI/1000,2), DBCALLTI=round(DBCALLTI/1000,2) | timechart avg(RESPTI) as "Average_Execution_Time" avg(DBCALLTI) as "Average_DB_Time" span=5m | eval Average_Execution_Time = round(Average_Execution_Time,2), Average_DB_Time=round(Average_DB_Time,2) | eventstats | eval UCL='stdev(Average_Execution_Time)'+'mean(Average_Execution_Time)', UCL_DB='stdev(Average_DB_Time)'+'mean(Average_DB_Time)' | eval day_of_week = strftime(_time,"%A") | where day_of_week!= "Saturday" and day_of_week!= "Sunday" | eval New_Field=if(RESPTI >= UCL, 1, 0) | timechart sum(New_Field) span=$span$ This is the search that I am using. I am trying to get a bar chart that shows the number of times that RESPTI goes over the UCL. The problem I am having is that I cannot compare whether RESPTI is bigger than the UCL, since the value does not load. If I try to table it, like | table RESPTI, UCL, New_Field, then RESPTI just shows up empty.
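Two things in the search above explain the empty RESPTI: after the first `timechart`, per-event fields such as RESPTI no longer exist (only the timechart's output columns do), and a bare `| eventstats` with no aggregation function produces nothing for the later `eval` to quote. A sketch that computes the UCL with `eventstats` *before* any timechart, so RESPTI stays available for the comparison:

```
source=PR1 sourcetype="sap:abap" EVENT_TYPE=STAD EVENT_SUBTYPE=MAIN
    TCODE="ZORF_BOX_CLOSING" SYUCOMM="SICH_T" ACCOUNT=HRL*
| eval RESPTI=round(RESPTI/1000,2)
| eval day_of_week=strftime(_time,"%A")
| where day_of_week!="Saturday" AND day_of_week!="Sunday"
``` compute mean and stdev across all events, keeping each event ```
| eventstats avg(RESPTI) AS mean_RESPTI, stdev(RESPTI) AS stdev_RESPTI
| eval UCL=mean_RESPTI+stdev_RESPTI
| eval over_UCL=if(RESPTI>=UCL, 1, 0)
| timechart span=$span$ sum(over_UCL) AS times_over_UCL
```

This is a sketch, not a drop-in replacement: it uses mean + 1 stdev as the UCL over the whole search window, whereas the original appeared to want the UCL of 5-minute averages; adjust the aggregation level to whichever definition of UCL you intend.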
We have ingested Juniper logs as syslog and are trying to create dashboards for the network team. We need some dashboard templates for Juniper device log data.
How do I configure user experience monitoring for an application? Can you provide the steps? Thanks & Regards, Anshuman
Hi, I need help extracting a value from the field named "message". The "message" field values are as below:

The process C:\Windows\system32\winlogon.exe (PRD01) has initiated the power off of computer PC01 on behalf of user ADMIN JABATAN for the following reason: No title for this reason could be found
The process C:\Windows\system32\shutdown.exe (PRD01) has initiated the restart of computer PC01 on behalf of user ADMIN\SUPPORT for the following reason: No title for this reason could be found
The process C:\Windows\system32\shutdown.exe (PRD01) has initiated the restart of computer PC01 on behalf of user admin for the following reason: No title for this reason could be found

The values I want to extract are: newField ADMIN JABATAN ADMIN\SUPPORT admin   Please assist. Thanks.
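Since the literal phrases "on behalf of user" and "for the following reason" appear in every sample message, they can anchor a non-greedy capture. A sketch (it assumes those phrases are constant across all messages of interest):

```
... base search ...
| rex field=message "on behalf of user (?<newField>.+?) for the following reason"
| table newField
```

Against the three samples above, this should yield `ADMIN JABATAN`, `ADMIN\SUPPORT`, and `admin`; the non-greedy `.+?` matters because the user value itself may contain spaces and backslashes.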
Hello Splunkers! I'm trying to take a backup of a lookup file (file.csv), create a backup file (file_backup.csv), and schedule the search on a daily basis. The query below will run and overwrite the old backup file, but I want the scheduled search to run only when new entries are added to file.csv. |inputlookup file.csv |outputlookup file_backup.csv Also, I want to add 2 new columns (the user who edited the lookup and the time when it was edited) to the backup lookup.  Original file: file.csv column1 column2  The backup file file_backup.csv generated by the scheduled search should have the following: column1 column2 time user  Any thoughts please?   Cheers!
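A scheduled search cannot easily decide not to run, but it can be written so that it only *appends rows that are new*, which has the same effect on the backup file. A sketch (it assumes the lookup really has columns `column1` and `column2`, and uses a hash of the row as a change key):

```
| inputlookup file.csv
| eval keyhash=sha256(column1.";".column2)
``` keep only rows not already present in the backup ```
| search NOT
    [ | inputlookup file_backup.csv
      | eval keyhash=sha256(column1.";".column2)
      | fields keyhash ]
| eval time=strftime(now(), "%Y-%m-%d %H:%M:%S")
| fields column1 column2 time
| outputlookup append=true file_backup.csv
```

The `time` column here is the backup time, not the true edit time; the editing *user* is not stored in the CSV at all, so populating a `user` column would require correlating with audit data (how lookup edits appear in `_audit` depends on how the file is edited, e.g. via the Lookup Editor app), which is left out of this sketch.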
Kindly provide me a solution for the following. Suppose I have created 5 health rules; I can check the violated health rules in the 'Violations & Anomalies' tab on the controller. My question is: how will I get the exact count of how many times a particular health rule was violated for a specified custom time period?
Currently, I am trying to extract the DNS logs via TA_Windows, where the inputs.conf file has [WinEventLog: //DNS Server) disabled=0, but it is still not working. I am trying to get the DNS logs into the index (microsoft_windows) on the indexer. I have the DNS Server role installed on the machine. The UF is also installed, but it is still not working. I have looked at many other blogs, but none exactly point out the solution. Any help will be appreciated. Thanks
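The stanza as quoted — `[WinEventLog: //DNS Server)` — has a space after the colon and a closing parenthesis instead of a bracket, either of which would prevent the input from loading. A sketch of the expected syntax (assuming the target index already exists on the indexer):

```ini
# inputs.conf on the UF (or in the deployed TA_Windows local/ directory)
[WinEventLog://DNS Server]
disabled = 0
index = microsoft_windows
```

Note this collects the "DNS Server" *event log* channel; if what you need are the analytical/debug DNS query logs, those are written to a file and would need a separate `[monitor://...]` stanza instead. Checking `splunkd.log` on the UF for stanza parse errors is a quick way to confirm which form is in effect.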
When trying to deploy from https://github.com/aws-quickstart/quickstart-splunk-enterprise, I am unable to get past the SplunkCM EC2 instance deployment. The error being: Failed to receive 1 resource signal(s) within the specified duration. I have tried to follow the steps here: https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-failed-signal/ The instance appears to be successfully created in EC2, but I am unable to ssh into the instance to view if the cfn-signal scripts are successfully deployed, as this seems to be the likely issue here. Any help would be much appreciated. 
Hi all, First time posting here, so please be patient; I am relatively new to the Splunk environment, but I am struggling to figure out this search. My manager has asked me to create an alert for load balancers flapping on our server. Criteria: - Runs every 15 mins (I assume this can be set in the "alert" settings) - Fires if a load balancer switches from Up to Down and back more than 5 times. This second point I am struggling to work out; this is what I have so far:         index=xxx sourcetype="xxx" host="xxx" (State=UP OR State=DOWN) State="*" | stats count by State | eval state_status = if(DOWN+UP == 5, "Problem", "OK") | stats count by state_status           Note: "State" is the field in question, as it stores the UP/DOWN events. Based on this, I can get an individual count of when the load balancer displayed UP and when it displayed DOWN; however, I need to turn this into a threshold search that only counts how many times it changed from UP to DOWN 5 consecutive times. Any and all help will be much appreciated.
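Counting UP and DOWN events separately cannot detect flapping; what matters is the number of *transitions* between states. `streamstats` can compare each event with the previous one per load balancer. A sketch (it assumes `host` identifies the load balancer; substitute the correct field if several LBs share a host):

```
index=xxx sourcetype="xxx" host="xxx" (State=UP OR State=DOWN)
| sort 0 _time
``` remember the previous State for each host ```
| streamstats current=f last(State) AS prev_State by host
| eval changed=if(State!=prev_State, 1, 0)
| stats sum(changed) AS transitions by host
``` a full Up->Down->Up flap is 2 transitions; tune the threshold ```
| where transitions > 10
```

Run over a 15-minute window with the alert set to trigger when results are returned; the threshold of 10 transitions (5 full flaps) is an assumption to adjust to your definition of "flapping".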
The following query prints 'pp_user_action_name', 'Total_Calls', and 'Avg_User_Action_Response', but does not get the 'pp_user_action_user' values, as that field is outside of the userActions{} array. I am not able to combine values from the inner array and the outer array. How do I fix this? index="dynatrace" sourcetype="dynatrace:usersession" | spath output=pp_user_action_user path=userId | search pp_user_action_user ="xxxx,xxxx" | spath output=user_actions path="userActions{}" | stats count by user_actions  | spath output=pp_user_action_application input=user_actions path=application | where pp_user_action_application="xxxxx" | spath output=pp_user_action_name input=user_actions path=name | spath output=pp_user_action_targetUrl input=user_actions path=targetUrl | spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime | eval pp_user_action_name=substr(pp_user_action_name,0,150) | stats count(pp_user_action_response) As "Total_Calls" ,avg(pp_user_action_response) AS "Avg_User_Action_Response" by pp_user_action_name | eval Avg_User_Action_Response=round(Avg_User_Action_Response,0) | table pp_user_action_user,pp_user_action_name,Total_Calls,Avg_User_Action_Response | sort -Total_Calls PFB a sample event.
[-] applicationType: WEB_APPLICATION bounce: false browserFamily: MicrosoftEdge browserMajorVersion: MicrosoftEdge108 browserType: DesktopBrowser clientType: DesktopBrowser connectionType: UNKNOWN dateProperties: [ [+] ] displayResolution: FHD doubleProperties: [ [+] ] duration: 279730 endReason: TIMEOUT endTime: 1676486021319 errors: [ [+] ] events: [ [+] ] hasError: true hasSessionReplay: false internalUserId: xxxxx ip: xxxxx longProperties: [ [+] ] matchingConversionGoals: [ [+] ] matchingConversionGoalsCount: 0 newUser: true numberOfRageClicks: 0 numberOfRageTaps: 0 osFamily: Windows osVersion: Windows10 partNumber: 0 screenHeight: 1080 screenOrientation: LANDSCAPE screenWidth: 1920 startTime: 1676485741589 stringProperties: [ [+] ] syntheticEvents: [ [+] ] tenantId: xxxx totalErrorCount: 3 totalLicenseCreditCount: 1 userActionCount: 12 userActions: [ [-] { [-] apdexCategory: FRUSTRATED application: xxxx cdnBusyTime: null cdnResources: 0 cumulativeLayoutShift: null customErrorCount: 0 dateProperties: [ [+] ] documentInteractiveTime: null domCompleteTime: null domContentLoadedTime: null domain: xxxxx doubleProperties: [ [+] ] duration: 16292 endTime: 1676485757881 firstInputDelay: null firstPartyBusyTime: 15012 firstPartyResources: 2 frontendTime: 1289 internalApplicationId: xxxxx javascriptErrorCount: 0 keyUserAction: false largestContentfulPaint: null loadEventEnd: null loadEventStart: null longProperties: [ [+] ] matchingConversionGoals: [ [+] ] name: clickontasknamexxxxx navigationStart: 1676485742474 networkTime: 1881 requestErrorCount: 0 requestStart: 1175 responseEnd: 15003 responseStart: 14297 serverTime: 13122 speedIndex: 16292 startTime: 1676485741589 stringProperties: [ [+] ] targetUrl: xxxx thirdPartyBusyTime: null thirdPartyResources: 0 totalBlockingTime: null type: Xhr userActionPropertyCount: 0 visuallyCompleteTime: 16292 } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } { [+] } ] userExperienceScore: TOLERATED 
userId: xxxxx,xxxx userSessionId: xxxxx userType: REAL_USER }
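The reason `pp_user_action_user` comes back empty is that `stats count by user_actions` discards every field not named in the `by` clause, including the outer `userId`. Carrying the outer field through each `stats` keeps inner and outer values paired. A sketch of the same pipeline with that change:

```
index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=pp_user_action_user path=userId
| search pp_user_action_user="xxxx,xxxx"
| spath output=user_actions path="userActions{}"
``` keep the outer field alongside each inner array element ```
| stats count by pp_user_action_user, user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="xxxxx"
| spath output=pp_user_action_name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval pp_user_action_name=substr(pp_user_action_name,1,150)
| stats count(pp_user_action_response) AS Total_Calls,
        avg(pp_user_action_response) AS Avg_User_Action_Response
        by pp_user_action_user, pp_user_action_name
| eval Avg_User_Action_Response=round(Avg_User_Action_Response,0)
| sort -Total_Calls
```

An alternative for newer Splunk versions is `mvexpand` on the `userActions{}` multivalue instead of the `stats ... by user_actions` trick; either way, the key is that the grouping fields must include the outer-array field you want in the final table.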
Anybody here running Splunk Enterprise on an IBM E950 or similar IBM POWER8 or POWER9 CPU based servers with a Linux kernel?
I have a lookup with multiple columns (keys).  Some combinations make a unique match, but I need an ambiguous search on a single key to return all matched items of a particular field.  In a simplified form, the lookup is like this:

QID    IP         Detected
12345  127.0.0.1  2022-12-10
45678  127.0.0.1  2023-01-21
12345  127.0.0.2  2023-01-01
45678  127.0.0.2  2022-12-15
23456  ...        ...

QID and IP determine a unique Detected value; you could say the combination is a primary key.  There is no problem searching by the primary key.  My requirement is to search by QID alone.  For 12345, for example, I expect the return to be multivalued (2022-12-10, 2023-01-01). If I hard-code QID in an emulation, that's exactly what I get:     | makeresults | eval QID=12345 | lookup mylookup QID | table QID Detected     This gives me QID=12345 with Detected = 2022-12-10, 2023-01-01. But if I use the same lookup in a search, e.g.,   index=myindex QID=12345 | stats count by QID ``` result is the same whether or not stats precedes lookup ``` | lookup mylookup QID | table QID Detected   the result is blank: QID=12345 with no Detected value. The behavior can be more complex if the search returns more than one QID (e.g., QID IN (12345, 45678)).  Sometimes one of them will get Detected populated, but not the others. How can I make sure multiple matches are all returned?
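Since the `makeresults` emulation matches but the indexed data does not, the usual suspect is that the QID value coming out of the index is not byte-for-byte identical to the lookup's QID column (stray whitespace, differing case, or a string/number mismatch after field extraction). A sketch of a normalize-then-lookup test, with the output field named explicitly:

```
index=myindex QID=12345
| stats count by QID
``` normalize the key before matching; adjust to your data ```
| eval QID=trim(tostring(QID))
| lookup mylookup QID OUTPUT Detected
| table QID Detected
```

If normalization does not help, two other things worth checking (both assumptions to verify, not diagnoses): whether the lookup is defined in transforms.conf with a restrictive `max_matches`, and whether `case_sensitive_match` is set for it — either can suppress some of the multivalued matches.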
Hi All, I need to re-import new XML metadata into the Splunk Cloud SAML configuration, which is generated for Azure SSO users. The current cert is valid until 19/02/2023. The issue is that when I try to import the new XML (federationmetadata.xml) into the SAML configuration in Splunk, it constantly encounters the error “There are multiple cert,idepCertPath,idpCert.pem, must be directory". I tried to remove the idpCert.pem in ./etc/auth/idpCerts/idpCert.pem, and it shows a Server Error. I don't know how I can find the path (./etc/auth/idpCerts/idpCert.pem) in Splunk Cloud, as it is not on-premises. I really need your help, as the current cert will expire very soon (19/02/2023) and that would result in users and admins being locked out of Splunk Cloud. Is there any way to fix it? This is urgent. Many thanks, Goli @tlam_splunk @gcusello  I would greatly appreciate it if anyone could help me!
Unfortunately I have no control over the log data formatting. It is in the format:  Field1=Value1|Field2=Value2| ... |Criteria=one,two,three,99.0|... I have one field, Criteria, that has many values with embedded commas. Splunk search only gives me the first value; I want all the values treated as one in a stats count by. I tried the following to rewrite the commas, and I do see the changes in _raw, but stats still gets only the first value. index=myidx  Msg=mymsg  |  rex mode=sed field=_raw "s/,/-/g" | bucket span=1d _time as ts | eval ts=strftime(ts,"%Y-%m-%d") | stats count by ts Criteria
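The sed rewrite changes `_raw` but not the already-extracted `Criteria` field, which is why `stats` still sees the truncated value. Rather than rewriting the raw event, extracting the whole pipe-delimited value explicitly overrides the automatic extraction, because a field assigned by `rex` replaces the existing value in the pipeline. A sketch:

```
index=myidx Msg=mymsg
``` grab everything between "Criteria=" and the next pipe, commas included ```
| rex field=_raw "\|Criteria=(?<Criteria>[^|]+)"
| bucket span=1d _time AS ts
| eval ts=strftime(ts, "%Y-%m-%d")
| stats count by ts Criteria
```

If this extraction is needed in many searches, the same pattern can be made permanent with an `EXTRACT-criteria` entry in props.conf instead of an inline `rex`.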
Hi, I am using a regex to search for a field "statusCode" which could have multiple values, i.e. "200", "400", "500", etc.  I am attempting to create an Interesting Field "statusCode" and have it sorted by the different statusCode values. I am trying to perform a search using the following:     \\Sample Query index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolAppName ("\"statusCode\"") | rex field=_raw \"statusCode\"\s:\s\"?(?<statusCode>2\d{2}|4\d{2}|5\d{2})\"? \\Sample Log (Looks like a JSON object, but it's a string): "{ "correlationId" : "", "message" : "", "tracePoint" : "", "priority" : "", "category" : "", "elapsed" : 0, "locationInfo" : { "lineInFile" : "", "component" : "", "fileName" : "", "rootContainer" : "" }, "timestamp" : "", "content" : { "message" : "", "originalError" : { "statusCode" : "200", "errorPayload" : { "error" : "" } }, "standardizedError" : { "statusCode" : "400", "errorPayload" : { "errors" : [ { "error" : { "traceId" : "", "errorCode" : "", "errorDescription" : "", "errorDetails" : "" } } ] } }, "standardizedError" : { "statusCode" : "500", "errorPayload" : { "errors" : [ { "error" : { "traceId" : "", "errorCode" : "", "errorDescription" : "" "errorDetails" : "" } } ] } } }, }"     Using online regex tools and a sample log output, I have confirmed the regex works outside of a Splunk query.  I have also gone through numerous Splunk community threads and tried different permutations based on suggestions, with no luck.  Any help would be appreciated.
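Two likely issues with the `rex` as written: in SPL the regular expression must be passed as a quoted string (the pattern above is bare, so the parser sees the escapes before the regex engine does), and without `max_match=0` only the first statusCode per event is captured, even though the sample log contains three. A sketch:

```
index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType
    label_app=myCoolAppName "statusCode"
``` quoted pattern; max_match=0 makes statusCode multivalue ```
| rex field=_raw max_match=0 "\"statusCode\"\s*:\s*\"?(?<statusCode>[245]\d{2})\"?"
| stats count by statusCode
```

`[245]\d{2}` is equivalent to the original alternation `2\d{2}|4\d{2}|5\d{2}`, just shorter; the `\s*` around the colon is slightly more permissive than `\s` in case some events omit or double the spacing.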
Hello community, I'm trying to configure the props file so that the following event starts from the third line. Currently, I am testing as follows:   If I leave this setting, Splunk takes the timestamp from the first few lines, but it should take the timestamp from the lines that contain a date. Regards
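The screenshots of the event and the current props settings did not come through, so this can only be a generic sketch: to make each event start at a line containing a date and take its timestamp from there, the usual combination is `LINE_BREAKER` with a lookahead plus `TIME_PREFIX`/`TIME_FORMAT`. The sourcetype name and the date format below are assumptions to replace with the real ones:

```ini
[my_sourcetype]
SHOULD_LINEMERGE = false
# Break before any line that starts with a date like 2023-02-15 09:42:40
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
# Timestamp is at the start of the (new) event
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
```

With `SHOULD_LINEMERGE = false`, only lines matching the lookahead begin a new event, so the undated leading lines attach to the previous event rather than getting an inferred timestamp of their own.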
Is there a way in Splunk to determine how a user arrived at a destination IP? Did they click a link from a certain webpage, or did they go there directly? Another way to look at it: is there a way to separate user activity from webpage activity? Websites automatically load advertisements and other content within a second, or a very small time interval. Users, on the other hand, are scrolling, clicking on a link, then clicking on another link, which takes a significantly longer amount of time. Being able to consolidate webpage activity, where dozens of destination addresses are accessed within 5 seconds, into a single event showing just the first record would help reduce the number of results returned when you're looking at a time window containing several thousand records.
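The described consolidation can be approximated with a burst heuristic: group each user's requests into bursts separated by more than 5 seconds of silence, and treat the first event of each burst as the user-initiated click. A sketch using `streamstats` (the index and field names `user`/`dest_ip` are assumptions to map onto your proxy sourcetype):

```
index=proxy user=*
| sort 0 user _time
``` time gap since this user's previous request ```
| streamstats current=f last(_time) AS prev_time by user
| eval gap=_time-prev_time
| eval new_burst=if(isnull(gap) OR gap>5, 1, 0)
``` running burst counter per user ```
| streamstats sum(new_burst) AS burst_id by user
| stats earliest(_time) AS burst_start, earliest(dest_ip) AS first_dest,
        dc(dest_ip) AS dests_loaded, count AS requests by user burst_id
| convert ctime(burst_start)
```

Each result row is one "click": `first_dest` approximates where the user actually went, while `dests_loaded` shows how much embedded content (ads, trackers, CDNs) came along for the ride. For true click-vs-referred attribution, the `Referer` header in the proxy logs, where available, is stronger evidence than timing alone.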