All Topics

Hello, I have a lookup table where MAC addresses are listed with their associated vendors; basically an identifier. However, the MAC address in this lookup table (column name is 'prefix') only contains the first three octets - xx:xx:xx. What I'm trying to do is write a query that finds devices that were assigned/renewed an IP address from the DHCP server and, based on the MAC address in the result, identifies the vendor. I was able to extract the first three octets from the result, but when I add the lookup table to enrich the result with the vendor information, I get zero results. What am I doing wrong here? Thanks in advance!
index=some_dhcp description=renew | eval d_mac=dest_mac | rex field=d_mac "(?P<d_mac>([0-9-Fa-f]{2}[:-]){3})" | lookup vendor.csv Prefix as d_mac OUTPUT Prefix Vendor_Name | search Prefix=* | table date dest_mac Vendor_Name description
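A minimal sketch of one way this could work, with the likely trouble spots adjusted: the character class [0-9-Fa-f] probably should be [0-9A-Fa-f]; the rex captures a trailing separator, so d_mac ends with ":" and won't match a plain xx:xx:xx prefix; and the lookup column is named prefix (lowercase, per the post) while the query uses Prefix - lookup field names generally have to match the CSV header exactly. The assumption here is that vendor.csv has columns prefix and Vendor_Name and stores the prefix lowercase with ":" separators; adjust the normalization if yours differs.

index=some_dhcp description=renew
| eval d_mac=lower(substr(dest_mac, 1, 8))
| lookup vendor.csv prefix AS d_mac OUTPUT Vendor_Name
| where isnotnull(Vendor_Name)
| table _time dest_mac Vendor_Name description

substr(dest_mac, 1, 8) keeps just "xx:xx:xx", avoiding the rex entirely; if the lookup uses a different separator or case, normalize both sides the same way before the lookup.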
Hello, I'm using Splunk Cloud and I have a lot of saved searches - alerts, dashboards, and reports - that I need to move from one app to another. I have lists that map each saved search to the relevant app. Is there a way to do this with the API, or any other way that isn't manually one by one? Thanks
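One hedged sketch, assuming REST access to the search head management port is available on your Splunk Cloud stack (that, and the exact endpoint behaviour, are assumptions to confirm - the "move" endpoint is what the UI's Move action calls for knowledge objects): a saved search can be moved to another app with a POST like the one below, and the same call can be scripted in a loop over your mapping list. Dashboards live under data/ui/views rather than saved/searches, so they would use the corresponding endpoint.

curl -k -u <admin_user> -X POST \
  "https://<your-stack>:8089/servicesNS/<owner>/<source_app>/saved/searches/<search_name>/move" \
  -d user=<owner> -d app=<target_app>

All of the angle-bracket values are placeholders for your own stack, owner, app, and object names.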
I have a query that displays average duration. How do I modify the query to alert if avg(duration) is greater than 1000 over the last 15 minutes?
index=tra cf_space_name="pr" "cf_app_name":"Sch" "msg"."Logging Duration" AND NOT "DistributedLockProcessor" |rename msg.DurationMs as TimeT |table _time TimeT msg.Service | bucket _time span=1m | stats avg(TimeT) as "Avg" by msg.Service
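A minimal sketch, assuming the alert is scheduled every 15 minutes with a "Last 15 minutes" time range and should fire when any service's average exceeds 1000:

index=tra cf_space_name="pr" "cf_app_name":"Sch" "msg"."Logging Duration" AND NOT "DistributedLockProcessor"
| rename msg.DurationMs as TimeT
| stats avg(TimeT) as Avg by msg.Service
| where Avg > 1000

Then set the alert trigger condition to "number of results is greater than 0". The per-minute bucket from the original isn't needed if only the overall 15-minute average matters; keep it (and split the stats by _time as well) if the intent is a per-minute average.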
How do I configure my Splunk dashboard to display results from 8 AM to the current time by default? I see options for Today or for a specific date and time range, but not a combination of both.
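A hedged sketch for a Simple XML dashboard: "today at 08:00 until now" can be expressed with the relative time modifiers @d+8h and now as the default of the dashboard's time input (the token name below is just a placeholder):

<input type="time" token="dashboard_time">
  <label>Time range</label>
  <default>
    <earliest>@d+8h</earliest>
    <latest>now</latest>
  </default>
</input>

The same pair can be typed into the time picker's Advanced tab to verify the behaviour before editing the dashboard source.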
I'm currently working on integrating Splunk with AWX to monitor Ansible automation jobs. I'm looking for guidance on the best practices for sending AWX job logs to Splunk. Specifically, I'm interested in:
- Any existing plugins or recommended methods for forwarding AWX logs to Splunk.
- How to differentiate logs from QA and production environments within Splunk.
- Examples of SPL queries to identify failed jobs or performance metrics.
Any advice or resources you could share would be greatly appreciated. Thanks.
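On the QA/production point, one common pattern is a separate HEC token and index per environment (for example awx_qa and awx_prod - names invented here), which keeps the environments separable by index alone. A rough sketch of how that split can then be used in SPL; the index names and the "failed" term are assumptions about what your AWX external-logging events actually contain, so check a few raw events first:

index=awx_qa OR index=awx_prod "failed"
| eval environment=if(match(index, "_qa$"), "QA", "Production")
| stats count by environment

For job-level reporting, replace the bare "failed" term with whatever status field AWX's logger emits in your setup once you've inspected the data.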
Hey all, super new to Splunk administration - I'm having issues with Bro logs not being indexed properly. I have 2 days of logs in a folder, but when I search the index - despite Indexes showing millions of events - I only see the Bro tunnel logs, and they're for the wrong day. I'm not even looking to set up all the sourcetypes and extractions at this moment; I just want all of the logs ingested and searchable on the correct day/time. I've played with the Bro apps and switched the config around in props.conf. I've also deleted the fishbucket folder to start over and force re-indexing. Overall I feel like there's another step I'm missing.
inputs.conf:
[monitor://C:\bro\netflow]
disabled = false
host = MyHost
index = bro
crcSalt = <SOURCE>
1) Why are the tunnel logs being indexed for the wrong day? How do I fix it?
2) Where are the rest of the logs, and how do I troubleshoot?
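One hedged sketch for the timestamp side, assuming these are the default tab-separated Zeek/Bro logs where the first column (ts) is an epoch value: without an explicit sourcetype and TIME_FORMAT, Splunk guesses timestamps, which is one plausible cause of events landing on the wrong day. The sourcetype name below is a placeholder; props.conf needs to live on the parsing tier (indexer or heavy forwarder), and the monitor stanza would set sourcetype = bro_log to match.

[bro_log]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %s.%6N
MAX_TIMESTAMP_LOOKAHEAD = 20

For the missing logs, checking index=_internal for messages about the monitored path, and confirming each log file actually matches the monitor stanza, is a reasonable next step.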
I'm getting the error 'Error occurred while trying to authenticate. Please try Again.' while authenticating to Salesforce from Splunk.
Hi All, I want to add an entry as the first row of my lookup. I know how to append entries using outputlookup, but is there any way to prepend an entry as the first row of the lookup?
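A minimal sketch: a CSV lookup can be rewritten with the new row first by generating the row and appending the existing contents behind it (mylookup.csv and the field names are placeholders). Note that outputlookup rewrites the whole file.

| makeresults
| eval field1="new value", field2="another value"
| fields - _time
| append [| inputlookup mylookup.csv]
| outputlookup mylookup.csv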
index=abc cf_space_name=prod-ad0000123 cf_app_name IN (RED,Blue,Green) "Initiating " OR "Protobuf message received" OR "Event Qualification Determined" | bucket _time span=1m | stats count(eval(cf_app_name == "RED")) as RedVolume count(eval(cf_app_name == "Blue")) as BlueVolume count(eval(cf_app_name == "Green")) as GreenVolume avg(GreenVolume) as AvgGVolume | eval estimate = (RED + Blue - Green) / AvgGVolume
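A hedged rework, assuming the intent is per-minute counts per app plus an average of the Green volume across minutes: avg(GreenVolume) can't be computed in the same stats that creates GreenVolume, and the final eval references RED/Blue/Green, which don't exist after the stats. One way to structure it (the estimate formula is a guess at the intent):

index=abc cf_space_name=prod-ad0000123 cf_app_name IN (RED,Blue,Green) "Initiating " OR "Protobuf message received" OR "Event Qualification Determined"
| bucket _time span=1m
| stats count(eval(cf_app_name=="RED")) as RedVolume count(eval(cf_app_name=="Blue")) as BlueVolume count(eval(cf_app_name=="Green")) as GreenVolume by _time
| eventstats avg(GreenVolume) as AvgGVolume
| eval estimate = (RedVolume + BlueVolume - GreenVolume) / AvgGVolume

The by _time makes the span=1m bucketing meaningful; eventstats then spreads the overall Green average across every minute row.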
I'll first insert my whole splunk search query and show whats it showing and whats the expected result           index=sss sourcetype=sss_trade_www_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<booking_mq_timestamp>\X+)\", sky_to_mq" | rex field=_raw "mq_latency=\"(?<mq_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sss sourcetype=Sss_Www_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<booking_ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval booking_ep_timestamp = strftime(strptime(booking_ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y/%m/%d %H:%M:%S")] | join type=left sss_id [ search index=sss "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=booking_mq_timestamp "(?<booking_mq_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(booking_ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sss_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sss_id, event_id, booking_timestamp, booking_mq_timestamp, booking_ep_timestamp, mx_status, operation, action, mq_latency, ep_latency, portfolio_name, portfolio_entity | sort booking_ep_timestamp | join type=left sss_id [ search index=sss sourcetype=sss_cashfx_catchup_logs "[Www] - Done incremental update" | rex field=_raw "Max Ssslib TradeID: (?<sss_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S"), "%Y/%m/%d %H:%M:%S") | table sss_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S") | eval www_to_sss_latency = round(catchup_unix_time - booking_timestamp_unix, 0) | eval booking_mq_timestamp_unix = strptime(booking_mq_timestamp, "%Y/%m/%d %H:%M:%S") | eval mq_latency = round(booking_mq_timestamp_unix - booking_timestamp_unix, 0) | eval booking_ep_timestamp_unix = strptime(booking_ep_timestamp, "%Y/%m/%d %H:%M:%S") | eval ep_latency = round(booking_ep_timestamp_unix - booking_mq_timestamp_unix, 0) | eval mq_to_sss_update_latency = round(catchup_unix_time - booking_mq_timestamp_unix, 0) | table trade_id, portfolio_name, portfolio_entity, sss_id, event_id, booking_timestamp, booking_mq_timestamp, booking_ep_timestamp, mq_latency, ep_latency, catchup_updated_time, www_to_sss_latency, mq_to_sss_update_latency, 
mx_status, operation, action, | dedup sss_id | sort booking_timestamp
It gives me this table, but since I can't show all the table's rows, I'll show the relevant ones:
trade_id sss_id booking_timestamp booking_mq_timestamp booking_ep_timestamp mq_latency ep_latency catchup_updated_time www_to_sss_latency mq_to_sss_update_latency
abc 123 597616519 2024/06/15 09:22:37 2024/06/15 09:24:16 2024/06/15 09:24:16 99 0 2024/06/15 09:24:26 109 10
abc 341 597616518
abc 218931 597616517
abc 1201 597614937 2024/06/15 07:50:14 2024/06/15 07:51:12 2024/06/15 07:51:12 58 0 2024/06/15 07:51:19 65 7
abcc 219 597614936
abc 219 597614935
Just assume the booking_timestamp, booking_mq_timestamp, booking_ep_timestamp, mq_latency, and ep_latency columns are all filled. Since my catchup_updated_time is taken from a log entry, it is populated (e.g. 2024-06-15 10:57:03,114 [Www] - Done incremental update. Max SSSSSS TradeID: 597618769), but the rest of the rows/columns are not populated. I want to highlight this specific row, since it is taken from logs, and also fill the empty catchup_updated_time so that 597616518 and 597616517 take the catchup_updated_time, latency, etc. of 597616519: their IDs come before it, and 597616519 is the max ID taken from the logs, so its row should be highlighted. Hence anything before or smaller than 597616519 should have the same catchup_updated_time - but only down to 597614937, as that one already has its own catchup_updated_time taken from logs. The same applies to the rest of the rows. Is this complicated? Please let me know if you need more info.
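A hedged sketch for the fill-down part: if the rows are sorted by sss_id descending, filldown copies the last non-null catchup values onto the following (smaller-id) rows until it hits the next row that has its own value, which appears to match the requirement. Flagging the "from logs" rows before the fill gives you something to drive highlighting from (the numeric sss_id assumption may need a tonumber() first). This would be appended after the existing pipeline:

| eval from_logs=if(isnotnull(catchup_updated_time) AND catchup_updated_time!="", "yes", "no")
| sort 0 - sss_id
| filldown catchup_updated_time www_to_sss_latency mq_to_sss_update_latency
| sort 0 booking_timestamp

The from_logs field can then be used for table cell highlighting via dashboard formatting or a color-by-value rule.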
The first Splunk query gives me a value in a table. The value is a jobId. I want to use this jobId in a second search query. Can we join them the Splunk way?
index=myindex cs2k_transaction_id_in_error="CHG063339403031900 major_code="ERROR" | rex field=_raw "Job Id: (?<jobId>.*?)\." | table jobId
index=myindex "TTY"  "jobId"
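A hedged sketch of one way to wire these together with a subsearch, assuming the first search returns one or a handful of jobId values (a closing quote missing after the CHG value in the original is added here):

index=myindex "TTY"
    [ search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
      | rex field=_raw "Job Id: (?<jobId>.*?)\."
      | dedup jobId
      | return 10 $jobId ]

return 10 $jobId hands back up to 10 raw jobId values, which the outer search matches as plain search terms; if jobId is an extracted field in the second data set, use return 10 jobId instead so the values are emitted as jobId=... terms.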
Hello, I have a case where I need to do a regex extraction, and I built my regex using regex101; everything works great and catches everything there. But I ran into an issue where Splunk won't accept optional groups "(\\\")?" - it gives an unmatched closing parenthesis error until you add another closing bracket, like so: "(\\\"))?". Another issue I encountered is that after I add this closing bracket, the regex works, but not consistently. Here's what I mean. This is part of my regex:
\[\{(\\)?\"PhoneNumber(\\)?\":(\\)?\"(?<my_PhoneNumber>[^\\\"]+
It won't work until I add more brackets to the optional groups, like I mentioned before:
\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+
Second issue: adding another part will still work:
\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+)\S+OtherPhoneNumber(\\))?\":(\\))?(\"))?(?<myother_PhoneNumber>[^,\\\"]+|null)
But adding a third part with the exact same format as the second part won't - it gives the unmatched closing parenthesis error again:
\[\{(\\))?\"PhoneNumber(\\))?\":(\\))?\"(?<my_PhoneNumber>[^\\\"]+)\S+OtherPhoneNumber(\\))?\":(\\))?(\"))?(?<myother_PhoneNumber>[^,\\\"]+|null)\S+Email(\\))?\":(\\))?(\"))?(?<email>[^,\\\"]+|null)
Am I missing something? I know the regex itself works.
Sample data of the original log:
[{"PhoneNumber":"+1 450555338","AlternativePhoneNumber":null,"Email":null,"VoiceOnlyPhoneNumber":null}]
[{\"PhoneNumber\":\"+20 425554005\",\"AlternativePhoneNumber\":\"+1 455255697\",\"Email\":\"Dam@test.com.us\",\"VoiceOnlyPhoneNumber\":null}]"}
[{\"PhoneNumber\":\"+1 459551561\",\"AlternativePhoneNumber\":\"+1 6155555533\",\"Email\":null,\"VoiceOnlyPhoneNumber\":\"+1 455556868\"}]
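Not certain of the root cause of the parenthesis error, but since each group only wraps an optional literal backslash, one workaround to try is dropping the groups and making the escaped backslash itself optional (\\? instead of (\\)?), which sidesteps the group counting entirely. A sketch of the first part under that assumption:

\[\{\\?\"PhoneNumber\\?\":\\?\"(?<my_PhoneNumber>[^\\\"]+)

If the inline rex still misbehaves, another option is moving the pattern into a props/transforms field extraction, where the search-language string quoting rules don't apply.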
I created a Python script that successfully links episodes with my 3rd-party ticketing system. I'm trying to populate that ticketing system with some of the "common field" values associated with a given episode, but I don't see a good way to do that. Does anyone have any hints on how to accomplish this? I'm probably missing something very obvious in the documentation. Thanks!
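One hedged idea (not necessarily the intended API): an episode's constituent notable events usually live in a grouped-alerts index in a default ITSI install, keyed by an episode/group id field - the index and field names below (itsi_grouped_alerts, itsi_group_id) are assumptions to verify in your environment. The script could run a search like this via the Splunk REST search API and read the common-field values from the result:

index=itsi_grouped_alerts itsi_group_id="<episode_id>"
| stats latest(*) as *

<episode_id> is a placeholder for the episode identifier your script already has from the linking step.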
I have 2 different Splunk apps: one is a TA and the other is an app.
TA: uses a modular input to connect to a data source. Some logs and metadata are pulled from the data source - logs via syslog through a TCP input, and metadata via an API key and secret. The metadata is stored in KV stores.
App: is installed on search heads and provides dashboards/reports that make use of the logs and metadata sent by the HF.
For Splunk Enterprise, the above approach works when the HF knows about the search heads, because the HF takes care of uploading the KV stores to the search heads via a scheduled search. This ensures that the app residing on the SH has the data to work with. However, on Splunk Cloud, once the TA is installed, how do we ensure that the SH nodes have the metadata to work with? Can we find out the search head FQDNs so that the KV stores can be copied there via a scheduled search?
I have this question as a reference: Splunk Question. I have one indexer, one SH, and one forwarder. At some point I had sent data from the forwarder to the indexer and it was searchable from the SH. After a few runs, I received this error:
Search not executed: The minimum free disk space (5000MB) reached for /export/opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000
This was because our root filesystem was only 20 GB, and after a few searches it had dropped below 5 GB. At that point I wanted to remount, move a few things around, and move Splunk into a larger filesystem. So I did, and since then I have fully removed and reinstalled Splunk and remounted it multiple times on each server, and some of these issues persist. Currently, the search head won't connect to and search the indexer. Storage and server resources are fine, and I can connect through telnet over port 8089, but the replication keeps failing. I keep receiving this error:
Bundle replication to peer named dc1nix2dxxx at uri https://dc1nix2dxxx:8089 was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information. Expected common latest bundle version on all peers after sync replication, found none. Reverting to old behavior - using most recent bundles on all
I can connect and do everything else on the server through the backend, and I can telnet between the two, so I'm not sure what to do. Most of what I keep finding has me check settings under the distributed environment, and most of the time those settings say that due to our license we aren't allowed to access them. All I have set is one search peer, dc1nix2pxxx:8089. Some sources say it's an issue with the web.conf settings, but I don't have a web.conf under my local directory, and if I did, what should it look like? I just have three servers I'm working with. I'd appreciate any help or guidance in the right direction. Thank you
I have been working on our Splunk dev environment, and since then I have reinstalled and uninstalled Splunk many times. My question is: why do the apps and a few other artifacts remain, even on a fresh install? Once I wipe all traces of Splunk off a server, I would expect the reinstall to be a fresh start, yet some of the GUI settings remain, and even some apps on the specific servers remain. I have one dev indexer, SH, and forwarder. There are specific apps that I installed for people months ago, and since then I have rm -rf'd all traces of Splunk that I could find, yet upon reinstalling Splunk I still see those apps under /SPLUNK_HOME/etc/apps. I am unpacking the same tar on each server, yet things like that persist across the servers. My question is: what is storing that info? For example, the app BeyondTrust-PMCloud-Integration/, located under /export/opt/splunk/etc/apps, persists through two or three reinstalls of Splunk. Is the filesystem storing data about the Splunk install even after I rm -rf all of /export/opt/splunk? I'm trying to fix some annoying issues with replication and such by just resetting the servers, since I am building them from the ground up, but these servers are still retaining some stuff. I decided to redo Splunk dev after we kept having issues with the old dev environment. I wanted a completely fresh start, but it seems as if Splunk retains some things even after a full reset. So I'm not sure if some problems are still persisting because something from a previous install is still floating around somewhere. Thanks for any help.
Per the Container automation API docs, "the update API is supported from within a custom function". However, for the following code, "Validate" fails with "Undefined variable 'container'":
update_data = {}
update_data['name'] = 'new container name'
phantom.update(container, update_data)
What is the fix?
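A hedged sketch of one fix: inside a custom function the container object isn't defined automatically, so the container id can be passed in as a function input and resolved before calling update. The parameter name container_id and the way the playbook wires it in are assumptions; the lookup call is the automation API's get_container.

import phantom.rules as phantom

def update_container_name(container_id=None, **kwargs):
    # Fetch the container dict from the id supplied by the playbook
    # (assumption: the playbook passes the container's id into this input).
    container = phantom.get_container(container_id)

    update_data = {}
    update_data['name'] = 'new container name'

    # phantom.update() takes the container object plus the fields to change.
    phantom.update(container, update_data)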
Does anyone know how to reach Splunk Sales in the US? Is there a new process to reach them, now that they're part of Cisco? I've been trying to reach them for over a month. I've submitted contact forms on the website. No response - not even an automated response. Calls to 1 866.GET.SPLUNK go to voicemail. Thanks.
I have a logfile like this -   2024-06-14 09:34:45,504 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc6285603725604350160.tmp&name=Dittmar%20-%20NO%20Contents%20-%20%20company%20Application%20(Please%20Sign)%20-%20signed&contentCreator=ALEXANDER BLANCO&mimeType=application/pdf&accountNum=09631604&policyNum=12980920&jobIdentifier=34070053 2024-06-14 09:34:45,505 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] Uploading file to pc from \\myloc\CoreTmp\app\pc\in\gwpc628560372560435 2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp&name=wagnac%20%20slide%20coverage%20b&description=20% rule&contentCreator=JOSEY FALCON&mimeType=application/pdf&accountNum=09693720&policyNum=13068616 2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] The Upload Service /repo/service/company/upload failed in 0.000000 seconds, null 2024-06-13 09:22:49,103 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-43] Exception from executeScript: 051333149 Failed to execute web script. org.springframework.extensions.webscripts.WebScriptException: 051333149 Failed to execute web script. at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:105) at org.repo.repo.web.scripts.RepositoryContainer.lambda$transactionedExecute$2(RepositoryContainer.java:556) at org.repo.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java:450) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:539) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:663) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:699) ... 23 more Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - Error at index 0 in: " r" at java.base/java.net.URLDecoder.decode(URLDecoder.java:232) at java.base/java.net.URLDecoder.decode(URLDecoder.java:142) at com.mysite.core.repo.util.RepositoryUtils.decodeValue(RepositoryUtils.java:465) at com.mysite.core.repo.BaseWebScript.getParameterMap(BaseWebScript.java:138) at com.mysite.core.repo.upload.FileUploadWebScript.executeImpl(FileUploadWebScript.java:37) at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:75) ... 47 more 2024-06-13 09:22:49,124 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-53] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/search Query String: center=cc&docId=a854dbad-af6e-43e3-af73-8ac66365e000   Now there are multiple log entries so we need to first check for the presence of this error "Illegal hex characters in escape (%) pattern". Then looking at the SessionID... 
in this case [http-nio-8080-exec-43], though there can be lots of others and possibly duplicate SessionIDs in the log. Then check the line starting with "Query String" with the same or a close timestamp (HH:MM) and create a report like this:
AccountNumber PolicyNumber Name Location
09693720 13068616 wagnac%20%20slide%20coverage%20b \\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp
As you can see, there are two entries in the logfile for the same SessionID http-nio-8080-exec-43, but we want a record only for the entry where we got 1. the error "Illegal hex characters in escape" and 2. an entry originating at 2024-06-13 09:22. We can compare _time too, as the request event and the error event can differ slightly in time. So it is better to search and compare using the timestamp strftime(_time, "%Y-%m-%d %H:%M"); this way it compares the date, hour, and minute. BTW, we might have the same error with the same SessionID elsewhere in the log, but it would have a different timestamp, so it is very important to check the time as well, using the formatted value. I created one Splunk report. The inner and outer queries provide results separately, but when I merge them and run, although it looks at the required events, it does not return any data in the table:
index=myindex "Illegal hex characters in escape (%) pattern" | rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]" | eval outer_timestamp=strftime(_time, "%Y-%m-%d %H:%M") | table outer_timestamp, sessionID | join type=inner sessionID [ search index=index "Query String" AND "myloc" AND "center=pc" | rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]" | rex "accountNum=(?<AccountNum>\d+)" | rex "policyNum=(?<PolicyNum>\d+)" | rex "name=(?<Name>[^&]+)" | rex "description=(?<Description>[^&]+)" | rex "location=(?<Location>[^&]+)" | eval inner_timestamp=strftime(_time, "%Y-%m-%d %H:%M") | table sessionID, AccountNum, PolicyNum, Name, Description, Location, inner_timestamp ] | where outer_timestamp = inner_timestamp | table outer_timestamp, sessionID, AccountNum, PolicyNum, Name, Description, Location
What can be the issue? How can I get the desired result? Thanks!
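A couple of hedged observations and a join-free sketch: the subsearch reads index=index rather than index=myindex, which alone could explain the empty result, and join also keeps only the first matching subsearch row and caps subsearch output. An alternative is to pull both event types in one search and group by sessionID plus minute (field extractions reused from the post; adjust the search terms to whatever uniquely selects the "Query String" lines):

index=myindex ("Illegal hex characters in escape (%) pattern" OR ("Query String" "myloc" "center=pc"))
| rex field=_raw "\[http-nio-\d+-exec-(?<sessionID>\d+)\]"
| rex "accountNum=(?<AccountNum>\d+)"
| rex "policyNum=(?<PolicyNum>\d+)"
| rex "name=(?<Name>[^&]+)"
| rex "description=(?<Description>[^&]+)"
| rex "location=(?<Location>[^&]+)"
| eval minute=strftime(_time, "%Y-%m-%d %H:%M")
| eval is_error=if(like(_raw, "%Illegal hex characters in escape%"), 1, 0)
| stats max(is_error) as has_error values(AccountNum) as AccountNum values(PolicyNum) as PolicyNum values(Name) as Name values(Description) as Description values(Location) as Location by sessionID minute
| where has_error=1
| table minute sessionID AccountNum PolicyNum Name Description Location

Same caveat as the original approach: grouping by the exact minute will miss pairs that straddle a minute boundary.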
I have 2 records with PaymentType as send and receive. I would like to extract only the records with PaymentType = receive so that I can compare further. Could you please let me know how I can extract PaymentType as receive only?
transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}
transaction: {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"send","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}
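A minimal sketch, assuming PaymentType is not already an auto-extracted field (the index/sourcetype terms are placeholders): a rex with a loose \W+ between the key and the value tolerates both plain and backslash-escaped JSON, and the search then keeps only the receive records.

index=<your_index> "PaymentType"
| rex "PaymentType\W+(?<PaymentType>send|receive)"
| search PaymentType=receive

If the JSON is clean (not escaped), an alternative is to strip the leading "transaction: " text and use spath to get all the fields at once.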