All Posts


Your question would be much easier to understand if you skipped the complex SPL: first give sample data (anonymized as needed), illustrate the desired output, then explain the logic between the illustrated sample data and the desired output, without SPL.  To help diagnose your attempted SPL, you should also illustrate the actual output from that SPL and explain how the actual output differs from the desired output, if that is not painfully obvious. (Remember: what is "obvious" to you is not always obvious to volunteers who lack intimate knowledge of your dataset and your use case.) As a side note, the illustrated SPL implies that your sourcetype=sss_trade_www_timestamp contains snippets like "trade_id=foo", "mx_status=bar", and so on.  If so, Splunk would have extracted trade_id, status, etc. without your rex.  Is there any reason Splunk is not giving you those?
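As a quick check, something along these lines (a sketch reusing the index and sourcetype illustrated in the question) will show whether those fields are already extracted automatically:

index=sss sourcetype=sss_trade_www_timestamp
| fieldsummary
| search field IN (trade_id, mx_status, operation, action)

If the fields appear here with sensible values, the rex commands are redundant.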
I realize that the now() function does not give 13 digits of epoch time, only 10 digits, whereas my other two fields, eventStartsFrom and eventEndsAt, have 13 digits. eventStartsFrom = 1718394600000 now = 1718432273 eventEndsAt = 1718740200000 You mean the two extracted fields are not epoch time in seconds, but epoch time expressed in milliseconds.  Generally, it's a better idea to bring the data into line with now() so the semantics are clearer.  But considering that multiplication is more efficient than division, doing the opposite is perhaps better.  I suggest giving now() * 1000 a more semantically expressive name, such as now_ms, instead of calling it something with "date" in it.  This helps future maintenance.
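A minimal sketch of that renaming, using the two extracted fields from the question:

| eval now_ms = now() * 1000
| eval diffBeginDates = now_ms - eventStartsFrom
| eval diffEndDates = eventEndsAt - now_ms
| where diffBeginDates > 0 AND diffEndDates > 0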
I realize that the now() function does not give 13 digits of epoch time, only 10 digits, whereas my other two fields, eventStartsFrom and eventEndsAt, have 13 digits. eventStartsFrom = 1718394600000 now = 1718432273 eventEndsAt = 1718740200000 Hence, I multiplied now() by 1000 and then wrote the query below:

index=shared_data source="lambda:maintenance_window_handler" sourcetype="httpevent"
| eval nowdate = now() * 1000
| eval diffBeginDates = nowdate - eventStartsFrom
| eval diffEndDates = eventEndsAt - nowdate
| where diffBeginDates > 0 AND diffEndDates > 0

After this, the query behaved as intended. Thanks all for the help. (This thread can be closed now.)
Thanks for your time and help. I am posting my solution below in the thread. I will take your suggestion about posting sample datasets on board in my future posts, so that it is easier to get help.
There can be several ways to do this.  Transaction is not the most efficient, but in this case, I want to use its maxspan feature because your "same or close timestamp" is very difficult to quantify.  The command is actually very simple after you reconstruct the data that developers and error handlers put in there.

| rex "(\S+ +\S+) +(?<log_level>\S+) +\[(?<class>[^\[]+)\] +\[(?<threadId>[^\]]+)"
| rex "Query String: (?<query_string>.+)"
| rex "Service Path: (?<service_path>.+)"
| rex "The .+ Service (?<service_path>\S+)"
| rex "Caused by: (?<cause_exception>\S+): +(?<cause_error>.+)"
| transaction threadId startswith="log_level=INFO" endswith="log_level=ERROR" maxspan=1s
| where match(cause_error, "Illegal hex characters in escape")
| table accountNum policyNum name location

Your sample data would give:

accountNum | policyNum | name | location
09693720 | 13068616 | wagnac%20%20slide%20coverage%20b | \myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp

Here is a data emulation you can play with and compare with real data:

| makeresults
| eval data = mvappend(
    "2024-06-14 09:34:45,504 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc6285603725604350160.tmp&name=Dittmar%20-%20NO%20Contents%20-%20%20company%20Application%20(Please%20Sign)%20-%20signed&contentCreator=ALEXANDER BLANCO&mimeType=application/pdf&accountNum=09631604&policyNum=12980920&jobIdentifier=34070053 2024-06-14 09:34:45,505 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] Uploading file to pc from \\myloc\CoreTmp\app\pc\in\gwpc628560372560435",
    "2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-43] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/upload Query String: center=pc&contentType=reqDocExt&location=\\myloc\CoreTmp\app\pc\in\gwpc5799838158526007183.tmp&name=wagnac%20%20slide%20coverage%20b&description=20% rule&contentCreator=JOSEY FALCON&mimeType=application/pdf&accountNum=09693720&policyNum=13068616",
    "2024-06-13 09:22:49,101 INFO [com.mysite.core.repo.upload.FileUploadWebScript] [http-nio-8080-exec-43] The Upload Service /repo/service/company/upload failed in 0.000000 seconds, null",
    "2024-06-13 09:22:49,103 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-43] Exception from executeScript: 051333149 Failed to execute web script. org.springframework.extensions.webscripts.WebScriptException: 051333149 Failed to execute web script. at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:105) at org.repo.repo.web.scripts.RepositoryContainer.lambda$transactionedExecute$2(RepositoryContainer.java:556) at org.repo.repo.transaction.RetryingTransactionHelper.doInTransaction(RetryingTransactionHelper.java:450) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecute(RepositoryContainer.java:539) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:663) at org.repo.repo.web.scripts.RepositoryContainer.transactionedExecuteAs(RepositoryContainer.java:699) ... 23 more Caused by: java.lang.IllegalArgumentException: URLDecoder: Illegal hex characters in escape (%) pattern - Error at index 0 in: \" r\" at java.base/java.net.URLDecoder.decode(URLDecoder.java:232) at java.base/java.net.URLDecoder.decode(URLDecoder.java:142) at com.mysite.core.repo.util.RepositoryUtils.decodeValue(RepositoryUtils.java:465) at com.mysite.core.repo.BaseWebScript.getParameterMap(BaseWebScript.java:138) at com.mysite.core.repo.upload.FileUploadWebScript.executeImpl(FileUploadWebScript.java:37) at com.mysite.core.repo.BaseWebScript.execute(BaseWebScript.java:75) ... 47 more",
    "2024-06-13 09:22:49,124 INFO [com.mysite.core.repo.BaseWebScript] [http-nio-8080-exec-53] ****** NEW WEBSCRIPT REQUEST ****** Server Path: http://repo.mysite.com:80 Service Path: /repo/service/company/search Query String: center=cc&docId=a854dbad-af6e-43e3-af73-8ac66365e000")
| mvexpand data
| rename data AS _raw
| extract
| rex "(?<_time>\S+ +\S+)"
| eval _time = strptime(_time, "%F %T,%N")
| sort - _time
``` data emulation above ```
index=abc cf_space_name=prod-ad0000123 cf_app_name IN (RED, Blue, Green) "Initiating " OR "Protobuf message received" OR "Event Qualification Determined"
| bucket _time span=1m
| stats count(eval(cf_app_name == "RED")) as RedVolume count(eval(cf_app_name == "Blue")) as BlueVolume count(eval(cf_app_name == "Green")) as GreenVolume by _time
| eventstats avg(GreenVolume) as AvgGVolume
| eval estimate = (RedVolume + BlueVolume - GreenVolume) / AvgGVolume
I'll first insert my whole Splunk search query and show what it is showing and what the expected result is.

index=sss sourcetype=sss_trade_www_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id"
| rex field=_raw "mx_status=\"(?<status>\X+)\", operation"
| rex field=_raw "operation=\"(?<operation>\X+)\", action"
| rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp"
| rex field=_raw " eventtime_sgp=\"(?<booking_mq_timestamp>\X+)\", sky_to_mq"
| rex field=_raw "mq_latency=\"(?<mq_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)"
| join event_id
    [ search index=sss sourcetype=Sss_Www_EP_Logs "Successfully processed event"
      | rex field=_raw "INFO: (?<booking_ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})"
      | rex field=_raw "Successfully processed event: (?<event_id>\X+), action"
      | eval booking_ep_timestamp = strftime(strptime(booking_ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y/%m/%d %H:%M:%S")]
| join type=left sss_id
    [ search index=sss "New trades in amendment" "*pnl*"
      | rex "Trade Events (?<trades>.*)"
      | rex max_match=0 field=trades "(?<both_id>\d+:\d+)"
      | mvexpand both_id
      | rex field=both_id ":(?<sky_id>\d+)"
      | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"]
| rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})"
| rex field=booking_mq_timestamp "(?<booking_mq_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})"
| eval booking_pnl_timestamp = booking_pnl_timestamp."+0800"
| eval ep_latency = strptime(booking_ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z")
| eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z")
| search trade_id = "*"
| search sss_id = "*"
| search event_id = "*"
| search action = "*"
| search mx_status = "live"
| search operation = "*"
| table trade_id, sss_id, event_id, booking_timestamp, booking_mq_timestamp, booking_ep_timestamp, mx_status, operation, action, mq_latency, ep_latency, portfolio_name, portfolio_entity
| sort booking_ep_timestamp
| join type=left sss_id
    [ search index=sss sourcetype=sss_cashfx_catchup_logs "[Www] - Done incremental update"
      | rex field=_raw "Max Ssslib TradeID: (?<sss_id>\d+)"
      | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
      | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S"), "%Y/%m/%d %H:%M:%S")
      | table sss_id, catchup_updated_time, _raw]
| eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S")
| eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S")
| eval www_to_sss_latency = round(catchup_unix_time - booking_timestamp_unix, 0)
| eval booking_mq_timestamp_unix = strptime(booking_mq_timestamp, "%Y/%m/%d %H:%M:%S")
| eval mq_latency = round(booking_mq_timestamp_unix - booking_timestamp_unix, 0)
| eval booking_ep_timestamp_unix = strptime(booking_ep_timestamp, "%Y/%m/%d %H:%M:%S")
| eval ep_latency = round(booking_ep_timestamp_unix - booking_mq_timestamp_unix, 0)
| eval mq_to_sss_update_latency = round(catchup_unix_time - booking_mq_timestamp_unix, 0)
| table trade_id, portfolio_name, portfolio_entity, sss_id, event_id, booking_timestamp, booking_mq_timestamp, booking_ep_timestamp, mq_latency, ep_latency, catchup_updated_time, www_to_sss_latency, mq_to_sss_update_latency, mx_status, operation, action
| dedup sss_id
| sort booking_timestamp

It gives me this table, but as I can't show all the table's rows, I'll show the relevant ones:

trade_id | sss_id | booking_timestamp | booking_mq_timestamp | booking_ep_timestamp | mq_latency | ep_latency | catchup_updated_time | www_to_sss_latency | mq_to_sss_update_latency
abc 123 | 597616519 | 2024/06/15 09:22:37 | 2024/06/15 09:24:16 | 2024/06/15 09:24:16 | 99 | 0 | 2024/06/15 09:24:26 | 109 | 10
abc 341 | 597616518 | | | | | | | |
abc 218931 | 597616517 | | | | | | | |
abc 1201 | 597614937 | 2024/06/15 07:50:14 | 2024/06/15 07:51:12 | 2024/06/15 07:51:12 | 58 | 0 | 2024/06/15 07:51:19 | 65 | 7
abcc 219 | 597614936 | | | | | | | |
abc 219 | 597614935 | | | | | | | |

Just assume that booking_timestamp, booking_mq_timestamp, booking_ep_timestamp, mq_latency, and ep_latency are all filled. Since my catchup_updated_time is taken from a log entry, it is populated (e.g. 2024-06-15 10:57:03,114 [Www] - Done incremental update. Max SSSSSS TradeID: 597618769), but the rest of the rows/columns are not populated. I want to highlight this specific row, since it is taken from the logs, and also fill the empty catchup_updated_time cells: 597616518 and 597616517 should take the catchup_updated_time, latency, etc. of 597616519, because their ids come before it and 597616519 is the max id taken from the logs (its row should be highlighted). Hence anything before, or smaller than, 597616519 should have the same catchup_updated_time, but only down to 597614937, which already has a catchup_updated_time taken from the logs. The same applies to the rest of the rows. Is this complicated? Please let me know if you need more info.
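One way to sketch the fill-forward part of this in SPL, assuming the table is sorted by sss_id in descending order so that each empty row inherits from the nearest higher id that has values (filldown copies the last non-null value downward, and naturally stops at 597614937, which has its own values):

| sort 0 - sss_id
| filldown catchup_updated_time www_to_sss_latency mq_to_sss_update_latency

The row highlighting itself would be dashboard table formatting rather than SPL.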
def change_event_name(container=None, **kwargs):
    """
    Args:
        container

    Returns a JSON-serializable object that implements the configured data paths:
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    import phantom.rules as phantom

    outputs = {}

    # Write your custom code here...
    update_data = {}
    update_data['name'] = 'new container name'
    phantom.update(container, update_data)

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable

    return outputs
Everything Splunk knows about itself is in $SPLUNK_HOME (/export/opt/splunk, in this case).  Once that directory is wiped, there will be no remnants of Splunk software on the system.  Indexed data may remain, especially if $SPLUNK_DB is in a different mount point (as recommended). Before re-installing Splunk, did you confirm the app directories are gone?  Have you looked to see if they're part of the tarball you're expanding?
I tried passing the container class object as an input (item or list type) and not passing it as an input, but it does not work either way. The entire custom function, with a container class object passed as input, is below, followed by the error from debugging the playbook. Since the only custom function input types are item or list, it appears that it is not possible to pass a class object type as a custom function input. If so, I would guess that an unknown phantom function needs to be executed in the custom function that returns the container class object. Does anyone know if a phantom class object function (or some other Splunk SOAR Python library function) exists that returns the container class object? Or some other way to get the phantom.update() function to work within a custom function?

def change_event_name(**kwargs):
    """
    Returns a JSON-serializable object that implements the configured data paths:
    """
    ############################ Custom Code Goes Below This Line #################################
    import json
    import phantom.rules as phantom

    outputs = {}

    # Write your custom code here...
    update_data = {}
    update_data['name'] = 'new container name'
    phantom.update(container, update_data)

    # Return a JSON-serializable object
    assert json.dumps(outputs)  # Will raise an exception if the :outputs: object is not JSON-serializable

    return outputs

Jun 14, 19:04:13 : CustomFunctionRun with id=4043 FAILED: The custom function run is being marked failed because all of its constituent results failed
Error: Encountered an unhandled exception in custom function "change_event_name" for the parameter dictionary at index=0: {'container': 'container'}
Traceback (most recent call last):
  File "change_event_name", line 56, in cfentry
  File "lib3/phantom/decided/playbook_resource_score.py/playbook_resource_score.py", line 123, in _wrapper
  File "change_event_name", line 21, in change_event_name
  File "lib3/phantom/api/container/api_update.py/api_update.py", line 118, in update
  File "lib3/phantom/utils.py/utils.py", line 1166, in inner
  File "lib3/phantom/api/container/api_update.py/api_update.py", line 125, in _update
TypeError: string indices must be integers
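One avenue worth trying, sketched below under an assumption: the error above shows the literal string 'container' being passed in, so instead pass the numeric container id as an item-type input and resolve it inside the function. phantom.get_container() in the SOAR playbook API returns the container record as a dictionary, which is the shape phantom.update() expects. The input name container_id here is hypothetical.

def change_event_name(container_id=None, **kwargs):
    """
    Assumes the numeric container id arrives as an item-type input.
    Returns a JSON-serializable object that implements the configured data paths:
    """
    import json
    import phantom.rules as phantom

    outputs = {}

    # Resolve the full container record (a dict) from its id,
    # then pass that dict - not a string - to phantom.update().
    container = phantom.get_container(container_id)
    phantom.update(container, {'name': 'new container name'})

    assert json.dumps(outputs)  # outputs must stay JSON-serializable
    return outputs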
The solution is here: https://docs.splunk.com/Documentation/Splunk/9.0.1/Knowledge/Addfieldmatchingrulestoyourlookupconfiguration
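For quick reference, the rules that page describes are set with match_type on the lookup's transforms.conf stanza; a hypothetical example (the lookup and field names here are made up for illustration):

[my_lookup]
filename = my_lookup.csv
match_type = WILDCARD(url), CIDR(src_ip)
max_matches = 1

Fields not listed in match_type keep the default exact-match behavior.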
As @gcusello points out, the data you illustrated is suspiciously close to JSON.  Are you sure that your data is not like this instead?

{"transaction": {"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"} }

Or is it possible that you are simply illustrating an extracted field named transaction whose values are

{"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"receive","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}

and

{"version":1,"status":"approved","identifier":"0c4240e0-2c2c-6427-fb1f-71131029cd89","amount":"[REDACTED]","transactionAmount":"[REDACTED]","timestamp":"2024-06-13T04:29:20.673+0000","statusChangedTimestamp":"2024-06-13T04:29:56.337+0000","type":"payment","transferIdentifier":"cded3395-38f9-4258-90a5-9269abfa5536","currencyCode":"USD","PaymentType":"send","senderHandle":"[REDACTED]","recipientHandle":"[REDACTED]","fees":[],"transferMode":"contact"}

If not, your developers are really doing a disservice to everyone downstream, not just Splunkers.  But if the raw data is indeed as you originally posted, you can first extract the valid JSON into a field, let's call it transaction, then extract key-value pairs from this object.

| rex "transaction: *(?<transaction>{.+)"
| fromjson transaction

This is what you should get:

PaymentType | amount | currencyCode | fees | identifier | recipientHandle | senderHandle | status | statusChangedTimestamp | timestamp | transactionAmount | transferIdentifier | transferMode | type | version
receive | [REDACTED] | USD | | 0c4240e0-2c2c-6427-fb1f-71131029cd89 | [REDACTED] | [REDACTED] | approved | 2024-06-13T04:29:56.337+0000 | 2024-06-13T04:29:20.673+0000 | [REDACTED] | cded3395-38f9-4258-90a5-9269abfa5536 | contact | payment | 1
send | [REDACTED] | USD | | 0c4240e0-2c2c-6427-fb1f-71131029cd89 | [REDACTED] | [REDACTED] | approved | 2024-06-13T04:29:56.337+0000 | 2024-06-13T04:29:20.673+0000 | [REDACTED] | cded3395-38f9-4258-90a5-9269abfa5536 | contact | payment | 1

Here is an emulation you can play with and compare with real data:

| makeresults
| eval data = mvappend(
    "transaction: {\"version\":1,\"status\":\"approved\",\"identifier\":\"0c4240e0-2c2c-6427-fb1f-71131029cd89\",\"amount\":\"[REDACTED]\",\"transactionAmount\":\"[REDACTED]\",\"timestamp\":\"2024-06-13T04:29:20.673+0000\",\"statusChangedTimestamp\":\"2024-06-13T04:29:56.337+0000\",\"type\":\"payment\",\"transferIdentifier\":\"cded3395-38f9-4258-90a5-9269abfa5536\",\"currencyCode\":\"USD\",\"PaymentType\":\"receive\",\"senderHandle\":\"[REDACTED]\",\"recipientHandle\":\"[REDACTED]\",\"fees\":[],\"transferMode\":\"contact\"}",
    "transaction: {\"version\":1,\"status\":\"approved\",\"identifier\":\"0c4240e0-2c2c-6427-fb1f-71131029cd89\",\"amount\":\"[REDACTED]\",\"transactionAmount\":\"[REDACTED]\",\"timestamp\":\"2024-06-13T04:29:20.673+0000\",\"statusChangedTimestamp\":\"2024-06-13T04:29:56.337+0000\",\"type\":\"payment\",\"transferIdentifier\":\"cded3395-38f9-4258-90a5-9269abfa5536\",\"currencyCode\":\"USD\",\"PaymentType\":\"send\",\"senderHandle\":\"[REDACTED]\",\"recipientHandle\":\"[REDACTED]\",\"fees\":[],\"transferMode\":\"contact\"}")
| mvexpand data
| rename data AS _raw
``` data emulation above ```
Hi @Abass42 , the answer is easy: add more disk space to both your servers. On Indexers you must have disk space for data, but also disk space for bundle replication. On SHs you must have disk space for apps and for dispatches. How much disk space did you allocate on each of the servers? If this is a lab, you could save space by applying a limited retention period to your data, as in the sketch below. Ciao. Giuseppe
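A minimal sketch of that retention tuning in indexes.conf; the index name and the limits below are placeholders, not recommendations:

[my_lab_index]
# roll data older than 7 days to frozen (deleted unless coldToFrozenDir is set)
frozenTimePeriodInSecs = 604800
# cap the total size of the index at roughly 5 GB
maxTotalDataSizeMB = 5000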
Hi @rdhdr , good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Ah. I suspect this is more about the rex expression than the table. You could try a join:

index=myindex TTY
| rex field=_raw "Job Id: (?<jobId>.*?)\."
| join left=L right=R where L.jobId=R.jobId
    [ search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
      | rex field=_raw "Job Id: (?<jobId>.*?)\."
      | table jobId ]
Thank you for your comment. I posted sample data in the original post, and I will try your suggestion.
So, there are two ways to do this CAC authentication: SAML or LDAP trusted methods. Before, I thought PKI was the only option, but SAML opens up another option. I hope this helps: Configure single sign-on with SAML - Splunk Documentation
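For quick reference, the SAML method is enabled in authentication.conf; a hypothetical minimal stanza (all URLs, paths, and the entity id below are placeholders, and the CAC/PKI certificate handling lives on the IdP side, outside this snippet):

[authentication]
authSettings = saml
authType = SAML

[saml]
entityId = https://splunk.example.com
idpSSOUrl = https://idp.example.com/idp/SSO
idpCertPath = $SPLUNK_HOME/etc/auth/idp.crt.pem
signAuthnRequest = true
signedAssertion = true

Most deployments configure this through Settings > Authentication methods in Splunk Web rather than by editing the file directly.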
Thanks for the reply. Yes, I have tried that already; it does not work. The response (jobId) is in a table, so that won't allow this subsearch.
Would it be possible to post some sample data? It's a bit too easy to get lost in what is supposed to be an escape character versus a character in your data. Please replace any real phone numbers with dummy values.  Escaping backslashes in regex expressions is always fun, but I suspect that's where your issues are coming from. Escaping a backslash in a regex from the search box requires four backslashes, as there are two layers of escaping happening.  I try to construct regexes to avoid that:

| makeresults
| eval phone_data="[{\"PhoneNumber\":\"123-456-7890\"}]"
| append
    [ | makeresults
      | eval phone_data="[{\\\"PhoneNumber\\\":\\\"111-111-1111\\\"}]" ]
| rex field=phone_data "PhoneNumber[^\d]+(?<my_PhoneNumber>[0-9-\(\)]+)"

but if I'm making an incorrect assumption about the characters in a phone number, you can try

| rex field=phone_data "PhoneNumber[^\d]+(?<my_PhoneNumber>[^\\\\\"]+)"
Have you tried a subsearch?

index=myindex "TTY"
    [ search index=myindex cs2k_transaction_id_in_error="CHG063339403031900" major_code="ERROR"
      | rex field=_raw "Job Id: (?<jobId>.*?)\."
      | table jobId ]