All Topics


Hi Friends, I have a query input in the DB Connect app which collects data from an Oracle database. The database generates about 800 records per second. I'm using rising mode with SEQUENSE_NO as the rising column; SEQUENSE_NO is a unique, ascending column. My query looks like this:

SELECT * FROM table WHERE SEQUENSE_NO > ? AND tnx_stamp > sysdate - 10/1440 ORDER BY SEQUENSE_NO ASC

My input settings:
Max rows to retrieve: 0 (unlimited)
Fetch size: 1000 or 100000
Execution frequency: 120s

The input works fine, with no error or warning messages in the _internal log. Next, I compared the number of records between Oracle and Splunk over a time range, and they differ: Oracle has 140,425 records, while Splunk has fewer, with 135,008 events (I have tried many different time ranges, with the same result). Next, I ran a count timechart with span=1s and found that data loss occurs periodically. I then checked the internal dbx_job_metrics log: the times of data loss coincide with the job start_time. I don't know why this is happening. Would appreciate any help figuring out how to resolve this. Thanks!
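One hedged diagnostic sometimes suggested for rising-column inputs with this symptom: rows whose tnx_stamp falls outside the 10-minute window at execution time are skipped even if their SEQUENSE_NO is above the checkpoint, and rows committed out of order can land below a checkpoint that has already advanced. A minimal sketch of the query variant that relies on the rising column alone, keeping the poster's table and column names:

SELECT * FROM table WHERE SEQUENSE_NO > ? ORDER BY SEQUENSE_NO ASC

If the gap disappears with this variant, the loss was caused by the tnx_stamp window rather than the checkpoint itself.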
I have a field "Date" as below. However, there are some inconsistency in the date format.  How can I get the "30/1/2021" and change it to "1/30/2021" following the rest of the date format?   Dat... See more...
I have a field "Date" as below. However, there are some inconsistency in the date format.  How can I get the "30/1/2021" and change it to "1/30/2021" following the rest of the date format?   Date 4/16/2021 3/31/2021 2/28/2021 30/1/2021 2/13/2021
Hello, I have a single value trellis panel which shows the maximum count of errors for each car I have. These maximum error counts correspond to a location (called "name" in the table on the right, shown as an example; locations are encoded as numbers). How do I create a dynamic hover tooltip so that, as users hover their mouse over the values in the single value panel, a tooltip with the corresponding location appears? In the image below, I would like the tooltip to contain the corresponding location "name: 8" when the user mouses over the 21 and 19 in the single value panel. Image below as an example. Thank you!
What is the quickest way to list the files that exist in an index? I usually use this SPL, but it takes a long time, especially if the index is huge:

index="my-index" | dedup source | table source

Any ideas? Thanks
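A minimal sketch of a commonly faster alternative: tstats reads indexed metadata rather than raw events, so listing distinct sources is much quicker on large indexes:

| tstats count where index="my-index" by source
| fields source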
I want to add a link in the email alert like: "click here for more information." Instead of writing the complete URL, I just want "click here" as a hyperlink. Can anyone suggest a way to do this?
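A minimal illustration, assuming the alert email is sent as HTML and that markup in the message body is rendered rather than escaped (this depends on the email action configuration): an HTML anchor is what produces "click here" as the link text. The URL below is a placeholder:

<a href="https://example.com/more-info">click here</a> for more information.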
I get daily logs for some files (f1, f2, f3, f4, f5). If on some day I receive only files f1, f2, and f4, I want to build a table of the missed files, like the one below, and create an alert from it.

Not_Received
f3
f5

Please suggest a way to create this table.
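A minimal hedged sketch of one common pattern, assuming a lookup file expected_files.csv with a column named file listing f1 through f5, and that the file name is available in a search-time field also called file (both names are illustrative): append the expected list with a zero count, then keep the files that were never seen:

index=my-index earliest=-1d@d
| stats count by file
| append [| inputlookup expected_files.csv | eval count=0]
| stats sum(count) as total by file
| where total=0
| rename file as Not_Received
| table Not_Received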
Hi, I want to produce daily, weekly, monthly, and yearly reports.

empDirectory.csv contains Employee ID, Employee Name, Manager, ManagerID
one.csv contains Date1 and EMP_ID1
two.csv contains Date2 and EMP_ID2

I want to check whether an employee from two.csv is present in one.csv on a particular date. Below is my query:

| inputlookup one.csv
| dedup EMP_ID1, Date1
| lookup empDirectory.csv EMP_ID as EMP_ID1
| search ManagerID=Manager1
| table Date1, EMP_ID1, ManagerID
| join [ inputlookup two.csv
    | dedup EMP_ID2, Date2
    | lookup empDirectory.csv EMP_ID as EMP_ID2
    | search ManagerID=Manager1
    | table Date2, EMP_ID2 ]
| table Date1, EMP_ID1, ManagerID, Date2, EMP_ID2
| eval GoodEMP=if(EMP_ID1=EMP_ID2, "Good", "NotGood")
| search GoodEMP=GoodEMP
| table GoodEMP, "Employee ID", "Employee Name", Manager, ManagerID

Extending the above query to a timechart:

| timechart count(GoodEMP) as GoodEMP by Date2

The expected result is the total number of GoodEMP per day under a manager; I want to create daily, weekly, monthly, and yearly graphs from it.
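A minimal hedged sketch of the timechart step, assuming Date2 is in M/D/YYYY form (the format string is an assumption): timechart needs a real _time value, so convert Date2 before charting, then switch span to 1w, 1mon, or 1y for the other report periods:

| eval _time=strptime(Date2, "%m/%d/%Y")
| where GoodEMP="Good"
| timechart span=1d count as GoodEMP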
Hi, we have a requirement: I need to manually report a synthetic HTTP request so that it is recorded in Network Requests. I implemented a manual HTTP request following this documentation. HTTP requests that are automatically detected are recorded in Network Requests, Analyze, and Snapshots, but I can't find the request I reported manually. Checking the AppDynamics log, I found the beacon was added and the agent sent my beacon with a 200 response.

Questions:
What else do I need to implement or configure?
What are the limitations on the URL setting in HttpRequestTracker? Does the URL need to be reachable?

Background:
Android mobile app
The app has been integrated with AppDynamics for years
SDK version is 20.3.0
Hi folks, can anyone help me with encrypting/masking the aws_key and aws_secret values for multiple inputs stored in inputs.conf? By default these are not encrypted or masked. Thanks in advance.
I am running a Python script that collects an array of JSON objects, and multiple JSON objects end up clubbed into a single event. I want to split each JSON object into its own event. I added the props.conf below, but it is not splitting the events. @kamlesh_vaghela

{"ErrorCode": 0, "ErrorMessage": null, "Name": "test", "Description": null, "EngineeringUnits": null, "Comment": null, "CollectorName": "BRnjbnTC-Mkjk8_Calculation", "CollectionType": 2}
{"ErrorCode": 0, "ErrorMessage": null, "Name": "BR-MSL68.Lmkmnjk26_MIP.P1.ond", "Description": "Lmnnkj26_MlknlkIP..knnlkC01.Second", "EngineeringUnits": null, "Comment": null, "CollectorName": "BRknk-MSLAnk8_OPC_Intelnkjklution_Intkjkellutionkjkjkver", "CollectionType": 2}
{"ErrorCode": 0, "ErrorMessage": null, "Name": "BC-MSLA;k;okpoB0168.L26_MnlkjIP.PLC0jnlk1.UDE_SlkjlkIM_TRIlklj;lkGGER", "Description": "L26_Mjklj", "EngineeringUnits": null, "Comment": null, "CollectorName": "BRjkjTC-kljkljlkjik", "CollectionType": 2}

[PsG_SddT_Tags]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
LINE_BREAKER = }(\,){
SEDCMD-break = s/({"ErrorCode": \[)//g
SEDCMD-b = s/]}$//g
TRUNCATE = 0
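A minimal sketch of the kind of stanza often used for this splitting; the key change is SHOULD_LINEMERGE = false, since with it set to true Splunk re-merges the lines that LINE_BREAKER just separated. The breaker below assumes the objects are separated by commas and/or newlines, and the SEDCMDs assume the payload may arrive wrapped in JSON array brackets (both are assumptions about the feed):

[PsG_SddT_Tags]
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
LINE_BREAKER = }([\r\n,]+){
SEDCMD-strip_open = s/^\[//g
SEDCMD-strip_close = s/\]$//g

These settings apply at index time on the parsing tier, so they only take effect for newly indexed data.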
For example, I would like certain rows "ABC" to have a smaller index time than "DEF". In a normal search, "DEF" would have the smaller index time, as it is indexed first.
How do I take the results of a search, pass a field into a dbxquery, and then display results from both the search and the dbxquery? In this example, the search returns data about specific emails where the sender domain is found in the CSV lookup file. I then want to take the sender email address, use it in the WHERE clause of the dbxquery to return the accountname field from the database, and finally display various fields in a table, including results from the search and the field returned by the dbxquery SELECT.

index=email Act IN ("Acc") Sender!="*mydomain.com" [ | inputlookup susDomains.csv | stats values(domain) AS search | format ]
| eval senderEmail=split(Sender,"@")
| eval senderDomain=mvindex(senderEmail,1)
| eval acctCode=if((acc="CAU3A274"),"Admin","Manager")
| eval SQLRcpt = "'".Rcpt."'"
| map search="| dbxquery query=\"SELECT accountname FROM [IDM].[drip].[usr_hist_tbl] uht WHERE uht.mail = $SQLRcpt$ ; \" connection=\"Subscriber_History\""
| table _time, senderDomain, Sender, Rcpt, Subject, acctCode, accountname, SQLRcpt
| sort -_time

This search returns no data when I know there are records that should be returned. Can anyone see where I may have gone wrong? Thanks, Leigh
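One behavior of map worth noting here: map replaces each outer result with the results of its subsearch, so fields from the outer search (Sender, Rcpt, Subject, and so on) are lost unless they are re-created inside the subsearch via tokens, and map also defaults to maxsearches=10, silently dropping rows beyond the first ten. A minimal hedged sketch of that workaround (the eval field list and maxsearches value are illustrative):

| map maxsearches=100 search="| dbxquery query=\"SELECT accountname FROM [IDM].[drip].[usr_hist_tbl] uht WHERE uht.mail = $SQLRcpt$\" connection=\"Subscriber_History\" | eval Sender=\"$Sender$\", Rcpt=\"$Rcpt$\", Subject=\"$Subject$\""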
Hello, does anyone know how to edit or create a new entity dashboard like the screenshot below?

Also, I can add a dashboard to the entity, but I can't add an automatic filter based on the entity title or alias. When I put $host$ in the search, the dashboard won't load. See the screenshot below.

Thanks in advance.
Hi @alacercogitatus and everyone, can I ask a question about the "Input Add-On for G Suite" app? When I use the Input Add-On for G Suite to collect Google Drive logs into Splunk, an error occurs and the input stops.

Whichever version I use, the error always appears.
The error always happens 24 hours after the credential is created.

Once I create a new credential, the input collects Google Drive data normally; after 24 hours have passed, the error happens and the input stops.

Token refresh error on Splunk 7.3 + Input Add-On for G Suite 1.3.1:
{"log_level": "ERROR", "timestamp": "Sun, 20 Jun 2021 03:09:38 +0000", "errors": [{"filename": "GoogleAppsForSplunkModularInput.py", "msg": "invalid_grant: reauth related error (invalid_rapt)", "input_name": "ga://Google_Drive_Input_1", "line": 416, "exception_arguments": "invalid_grant: reauth related error (invalid_rapt)", "exception_type": "HttpAccessTokenRefreshError"}], "modular_input_consumption_time": "Sun, 20 Jun 2021 03:09:38 +0000"}

Token refresh error on Splunk 8.1 + Input Add-On for G Suite 1.4.2:
{"timestamp": "Mon, 21 Jun 2021 04:39:34 +0000", "log_level": "ERROR", "errors": [{"msg": "('invalid_grant: reauth related error (invalid_rapt)', '{\\n \"error\": \"invalid_grant\",\\n \"error_description\": \"reauth related error (invalid_rapt)\",\\n \"error_subtype\": \"invalid_rapt\"\\n}')", "exception_type": "<class 'google.auth.exceptions.RefreshError'>", "exception_arguments": "('invalid_grant: reauth related error (invalid_rapt)', '{\\n \"error\": \"invalid_grant\",\\n \"error_description\": \"reauth related error (invalid_rapt)\",\\n \"error_subtype\": \"invalid_rapt\"\\n}')", "filename": "GoogleAppsForSplunkModularInput.py", "line": 490, "input_name": "ga://Google_Drive_Input_1"}], "modular_input_consumption_time": "Mon, 21 Jun 2021 04:39:34 +0000"}

What is happening, and how can I solve it? Thanks a lot.
I'm wondering whether the data returned by a federated search is encrypted or not. Also, is it possible to encrypt Splunk indexes at rest?
Hi, I would like to find out how to calculate the time difference between different events for the same asset ID (grouped by asset). My data is structured as below, with no transaction IDs provided:

Asset_ID    _time                  Event_Status
A001        2021-01-01 00:00:00    A
A001        2021-01-01 00:01:00    B
A001        2021-01-01 00:07:00    A
A002        2021-01-01 00:01:00    B
A002        2021-01-01 00:02:00    C
A002        2021-01-01 00:09:00    A
A002        2021-01-01 00:11:00    D
A003        2021-01-01 00:00:00    B
A003        2021-01-01 00:09:00    D
...

Notes: the event statuses can appear in any order, and the time duration needs to be grouped by common asset ID for it to be meaningful. This is intended for a live system, so event statuses that aren't closed (the last value for an asset) should display the elapsed time since that event occurred.

The desired output would be to compute the time duration between rows as follows:
Rows 1 & 2 for asset A001: 1 min
Rows 2 & 3 for asset A001: 6 min
Row 3 for asset A001: running for X duration (based on current time) since 2021-01-01 00:07:00
Rows 4 & 5 for asset A002: 1 min
Rows 5 & 6 for asset A002: 7 min
Rows 6 & 7 for asset A002: 2 min
Row 7 for asset A002: running for X duration (based on current time) since 2021-01-01 00:11:00
Rows 8 & 9 for asset A003: 9 min
Row 9 for asset A003: running for X duration (based on current time) since 2021-01-01 00:09:00
...

I've previously tried experimenting with the transaction command and its duration field, but they don't seem to give the desired result. Any suggestions on how to resolve this would be greatly appreciated. Thanks.
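A minimal SPL sketch of one approach using streamstats instead of transaction (field names follow the sample data; now() supplies the open-ended duration for each asset's latest event):

| sort 0 Asset_ID -_time
| streamstats current=f window=1 last(_time) as next_time by Asset_ID
| eval duration=coalesce(next_time, now()) - _time
| eval duration_readable=tostring(duration, "duration")
| table Asset_ID _time Event_Status duration_readable

Sorting newest-first within each Asset_ID lets streamstats pick up the following event's timestamp as next_time; when next_time is null (the asset's most recent event), the duration runs up to the current time.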
Hello all, I am at a bit of a loss as to what to do at this point. I had an indexer fail, and now that it is healthy I cannot rejoin it to the cluster reliably. After a reboot/restart of Splunk, it will join and begin syncing buckets for about 10 minutes. After that, it throws errors and gets stuck in a "batch adding" state on the indexer management page. I get this error on the master:

Failed to register with cluster master reason: failed method=POST path=/services/cluster/master/peers/?output_mode=json master=:8089 rv=0 gotConnectionError=0 gotUnexpectedStatusCode=1 actual_response_code=502 expected_response_code=2xx status_line="Bad request" socket_error="No error" remote_error= [ event=ReaddPeer status=retrying AddPeerRequest: { _id= active_bundle_id=9884CA425F0224F22F37BE784337C463 add_type=ReAdd base_generation_id=0 batch_serialno=1 batch_size=4 forwarderdata_rcv_port=9997 forwarderdata_use_ssl=0 last_complete_generation_id=0 latest_bundle_id=9884CA425F0224F22F37BE784337C463 mgmt_port=8089 name=6A1C1358-0C02-4D60-B58B-EA903E3D0991 register_forwarder_address= register_replication_address= register_search_address= replication_port=8080 replication_use_ssl=0 replications= server_name= site=default splunk_version=8.0.7 splunkd_build_number=1c4f3bbe1aea status=Up } ]

The indexer in question displays the same error, along with this one; I'm not sure if it's related:

ERROR HTTPClientRequest: caught exception while parsing HTTP Reply: string value too long value size = 531110, Maxvaluesize = 524288

I should mention I did reinstall Splunk over the currently installed version as part of fixing that indexer.
Hello everyone, I hope everyone is having a great day. I have been bumping my head against the wall trying to select the smallest positive number from a multi-value field.

I have SPL that looks like this:

| stats values(numbers) as NUMBERS values(KEY) as KEY by client

But the field NUMBERS may have multiple values per client, and I want to select the smallest positive number and the KEY associated with it. For instance, I may have something like this:

Client     NUMBERS    KEY
NATALIE    -7         U
           8          Y
           5          M

From that result I am interested in keeping only:

Client     NUMBERS    KEY
NATALIE    5          M

Does anyone know how to achieve this? I will be so thankful if you can help me out. Thank you so much!
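A minimal hedged sketch, assuming each raw event carries one numbers/KEY pair (the pairing is lost once values() aggregates them separately): filter to positives, sort each client's events by numbers ascending, and keep the first per client:

| where numbers > 0
| sort 0 client numbers
| dedup client
| table client numbers KEY

dedup keeps the first event seen per client, which after the sort is the smallest positive number together with its own KEY.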
Hello, I have log entries that look like this:

2021-06-21 16:36:14 Error Fix Success for issue submitted by user:14
2021-06-21 16:35:22 Error Found for Users:12,13,14
2021-06-21 16:21:11 Error Fix Success for issue submitted by user:19
2021-06-21 16:20:43 Error Found for Users:15,19,22,23
2021-06-21 16:07:38 Error Fix Success for issue submitted by user:14
2021-06-21 16:05:51 Error Found for Users:12,13,14

I want to get the details (users, submitted_by user, and times) and calculate the durations of errors from when they are found to when they are fixed, without using transaction. Currently my search finds the duration from 16:05:51 to 16:36:14, because the two sets of events contain the same information. How can I rewrite my query (below) to get two different results for the error affecting users 12, 13, and 14?

My query:

index=INDEX host=HOST sourcetype=SOURCETYPE
| rex field=_raw "Error\sFound\sfor\sUsers:(?<users>.+)"
| rex field=_raw "Error\sFix\sSuccess\sfor\sissue\ssubmitted\sby\suser:(?<submitted_by_user>\d+)"
| where isnotnull(users) or isnotnull(submitted_by_user)
| sort 0 +_time -users
| filldown submitted_by_user users
| sort 0 -_time +users
| stats earliest(_raw) as earliest_raw latest(_raw) as latest_raw earliest(_time) as early_time latest(_time) as late_time by users submitted_by_user
| eval submitted_by_user=if(like(latest_raw, "%Found%"), "---", submitted_by_user)
| eval error_start=strftime(early_time, "%Y-%m-%d %H:%M:%S")
| eval error_end=if(submitted_by_user != "---", strftime(late_time, "%Y-%m-%d %H:%M:%S"), "---")
| eval duration=if(submitted_by_user != "---", tostring(late_time-early_time, "duration"), "---")
| eval users_involved=split(users, ",")
| eventstats count(users_involved) as user_count by earliest_raw
| fields - early_time late_time
| table users_involved, user_count, submitted_by_user, error_start, error_end, duration
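A minimal hedged sketch of one transaction-free approach: assign an incident number that increments on every "Error Found" event, so a repeated user list starts a fresh group instead of merging with the earlier one (this assumes each Found is resolved before the next Found for the same users appears, as in the sample data):

index=INDEX host=HOST sourcetype=SOURCETYPE
| sort 0 +_time
| rex field=_raw "Error\sFound\sfor\sUsers:(?<users>.+)"
| rex field=_raw "Error\sFix\sSuccess\sfor\sissue\ssubmitted\sby\suser:(?<submitted_by_user>\d+)"
| where isnotnull(users) OR isnotnull(submitted_by_user)
| streamstats count(eval(isnotnull(users))) as incident_id
| stats earliest(_time) as early_time latest(_time) as late_time values(users) as users values(submitted_by_user) as submitted_by_user by incident_id
| eval duration=tostring(late_time - early_time, "duration")

The remaining eval/table steps from the original query can then be applied per incident_id.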
How do I search for a rogue server added to my environment, including information about the hacker(s)?