I have a Splunk Cloud instance where we send logs from servers with the Universal Forwarder installed. All UFs are managed by a Deployment Server. My question is: what are the best practices for organizing apps, both Splunkbase downloads and in-house builds, and also configuration-only apps, if those are themselves a best practice? Right now we are experimenting with deploying the Splunkbase apps as-is (easier to update them) and deploying our configuration in an extra app whose name starts with numbers so its configuration takes precedence. But we have run into some issues in the past with this approach.
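One common pattern (a sketch, not the only valid layout; the app names here are hypothetical) is to leave Splunkbase apps untouched and put every override in the local/ directory of a separate configuration-only app, since any app's local/ settings take precedence over any app's default/ settings regardless of the app's name:

```
$SPLUNK_HOME/etc/deployment-apps/
    Splunk_TA_nix/                  <- Splunkbase app, deployed as shipped
        default/...
    org_all_forwarder_outputs/      <- in-house, configuration-only app
        local/
            outputs.conf            <- overrides for any app's defaults
            inputs.conf
```

With this split, Splunkbase apps can be updated in place without having to merge your changes back in, and the numeric-prefix naming trick becomes unnecessary.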
Hi All, I have a requirement to onboard data from a website like http://1.1.1.1:1234/status/v2. It is a vendor-managed API URL, so the application team cannot use the HEC token option. I have prepared a script to get the data and tested it locally, and the script works as expected. I created a forwarder app with a bin folder, put the script in it, and pushed the app to one of our Integration Forwarders, but I am unable to get any data into Splunk. I have tested connectivity between our IF and the URL and it is successful (did a curl to that URL and can see the URL content). I have checked firewalls and permissions and all seems to be OK, but I still cannot get data into Splunk. I also checked the _internal index but don't find anything there. Can someone guide me on what else I need to check to get this fixed? Below is my inputs.conf:

[monitor://./bin/abc.sh]
index = xyz
disabled = false
interval = 500
sourcetype = script:abc
source = abc.sh

I have also created props.conf as below:

[script:abc]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
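One thing worth checking, based on the stanza quoted above: `[monitor://...]` stanzas watch files for new content, so pointing one at a shell script would ingest the script's text rather than run it. To execute the script on an interval and index its stdout, a scripted input uses the `script://` scheme instead (a sketch, keeping the other settings as given):

```
[script://./bin/abc.sh]
index = xyz
disabled = false
interval = 500
sourcetype = script:abc
```

Also make sure the script is executable by the user running the forwarder, and check splunkd.log on the forwarder for ExecProcessor errors after the app is deployed.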
Hello Experts, this is a long-explored query that I am trying to find a way around. If we run a simple query like this:

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode
| fields country, ProductCode, type, Failed_Count, Passed_Count, Total

it gives me a result table where Total belongs to the specific country and ProductCode, i.e. an individual Total. Now there is a field 'errorinfo': I also want to show 'errorinfo' (e.g. "codeerror") in the list above, like this:

index=zzzzzz
| stats count as Total, count(eval(txnStatus="FAILED")) as "Failed_Count", count(eval(txnStatus="SUCCEEDED")) as "Passed_Count" by country, type, ProductCode, errorinfo
| fields country, ProductCode, type, Failed_Count, Passed_Count, errorinfo, Total

This query returns results like the table below:

country  ProductCode  type  Failed_Count  Passed_Count  errorinfo             Total
usa      111          1c    4             0             wrong code value      4
usa      111          1c    6             0             wrong field selected  6
usa      111          1c    0             60            NA                    70

How can I get results like the following, where Total remains the complete total of the txnStatus field (FAILED + SUCCEEDED)? If I can achieve this I can compute % of total as well. Note that Total belongs to one country: the usa rows show the usa total and the can rows show the can total.

country  ProductCode  type  Failed_Count  errorinfo             Total
usa      111          1c    4             wrong code value      70
usa      111          1c    6             wrong field selected  70
can      222          1b    2             wrong entry           50
can      222          1b    6             code not found        50

Thanks in advance, Nishant
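One way to get the per-group grand total (a sketch against the fields shown in the question) is to keep the stats split by errorinfo and then use eventstats to overwrite Total with the sum across all errorinfo values within each country/type/ProductCode group:

```
index=zzzzzz
| stats count as Total,
        count(eval(txnStatus="FAILED")) as Failed_Count,
        count(eval(txnStatus="SUCCEEDED")) as Passed_Count
        by country, type, ProductCode, errorinfo
| eventstats sum(Total) as Total by country, type, ProductCode
| eval pct=round(Failed_Count*100/Total, 2)
```

With the sample data above, every usa/111/1c row would carry Total=70, so a percentage column can be computed against the full FAILED+SUCCEEDED count.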
Hello, I have a correlation search with a variable that doesn't work:

| stats count by host
| eval hello_world = host

When I look in Incident Review, my alert shows $hello_word$ and not my host values. Can you help me please? Splunk version 7.3.5
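For reference, notable-event tokens are only substituted when the token name matches a field that exists in the search results exactly, character for character. With a search such as:

```
| stats count by host
| eval hello_world = host
```

the drill-down or description text must reference $hello_world$; a variant spelling like $hello_word$ (missing the "l") has no matching field and is left unexpanded in Incident Review.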
Hi All, I have the 10 error records below. The last 3 error records should not be ingested; only the first 7 error records should be ingested. How do we write the regular expression? Please guide me.

2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: No package class found with name: PRD-QDB35801A
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: (INVALID_DATA) Invalid value [V5100000003P211] specified for parameter [package_class__c] : Object record ID does not resolve to a valid active [package_class__c]
2023-11-06 15:30:48,941 ERROR https-jsse-nio-8443-exec-9 com.veeva.brp.batchrecordprint.BatchRecordPrintController - PRINT ERROR: Print failure response
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - Unknown error: {errorType=GENERAL, responseStatus=EXCEPTION, responseMessage=502 Bad Gateway}
2023-11-06 15:30:48,941 ERROR https-jsse-nio-8443-exec-2 com.veeva.brp.batchrecordprint.BatchRecordPrintController - (API_LIMIT_EXCEEDED) You have exceeded the maximum number of authentication API calls allowed in a [1] minute period.
2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - PCI ERROR: No package class found with name: PR01-PU3227V1MSPS 0001
2023-11-08 06:19:49,539 ERROR https-jsse-nio-8443-exec-1 com.veeva.brp.batchrecordprint.BatchRecordPrintController - DOCLIFECYCLE ERROR: Error initiating lifecycle action for document: 5742459, Version: 0.1
2023-10-25 10:56:46,710 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - Header Field Name: bom_uom_1_c, value:E3HR5teHlfOQjzUJ74jTdKh1Tu0yajHqT/H98klZOyU=
2023-10-25 10:56:46,711 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - BOM Field Name: BOM_Added_1, value is out of Bounds using beginIndex:770, endIndex:771 from line:
2023-10-25 10:56:46,711 ERROR pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks
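Assuming the events should be dropped at index time, one hedged approach, relying on the pattern that the three unwanted records all reference `com.veeva.bpr.batchrecordprint.scheduledTasks` while the seven wanted ones use `com.veeva.brp.batchrecordprint.ScheduledTasks` or `BatchRecordPrintController`, is to route the unwanted events to the nullQueue:

```
# props.conf (the sourcetype name here is a placeholder)
[veeva:brp]
TRANSFORMS-dropbom = drop_bpr_scheduledtasks

# transforms.conf
[drop_bpr_scheduledtasks]
REGEX = com\.veeva\.bpr\.batchrecordprint\.scheduledTasks
DEST_KEY = queue
FORMAT = nullQueue
```

If that class-name difference is actually a typo in the samples rather than a reliable discriminator, the REGEX could instead match the message text, e.g. `Header Field Name:` or `BOM Field Name:`.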
Hi, I would like to ask a question regarding lookup tables. I am managing login logs and I want to be sure that a specific host can only be accessed from a specific IP address; otherwise an alert is triggered. So basically I have a lookup built like this:

IP       HOST
1.1.1.1  host1
2.2.2.2  host2
3.3.3.3  host3

My goal is to build a search that finds whenever the IP-HOST association is not respected:

1.1.1.1 connects to host1 ---> OK
1.1.1.1 connects to host2 ---> BAD
2.2.2.2 connects to host1 ---> BAD

Connections to host1 should arrive only from 1.1.1.1, etc. How can I write this query? Thank you
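A sketch of one way to do this, assuming the lookup is named ip_host_lookup and the login events carry host and src_ip fields (both names are assumptions; adjust to your data): look up the expected IP for each host and keep only the mismatches:

```
index=auth sourcetype=login
| lookup ip_host_lookup HOST AS host OUTPUT IP AS expected_ip
| where isnotnull(expected_ip) AND src_ip != expected_ip
| table _time host src_ip expected_ip
```

The rows this returns are exactly the connections that violate the IP-HOST association, so an alert can trigger whenever the result count is greater than zero.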
Hi all, I'm trying to configure an SSL certificate for management port 8089 on the Manager Node and Indexers, in the file $SPLUNK_HOME/etc/system/local/server.conf on both the Manager Node and the Indexers:

[sslConfig]
sslRootCAPath = <path_to_rootCA>
sslPassword = mycertpass
enableSplunkdSSL = true
serverCert = <path_to_manager_or_indexer_cert>
requireClientCert = true
sslAltNameToCheck = manage-node.example.com

I checked the root CA and my server certificates on the Manager Node and Indexers with `openssl verify` and it returns OK. I use the same certificate for all Indexers and a separate one for the Manager Node. All my certificates have both the SSL server and SSL client purposes:

X509v3 Extended Key Usage: TLS Web Server Authentication, TLS Web Client Authentication

But when I set `requireClientCert = true`, it returns an "unsupported certificate" error and I can't access Splunk Web on the Manager Node. Please help me fix this!
Hi, I'm looking for security use cases for the Salesforce application. Please suggest some if you have any. Regards, BT
I need a Python file/function to be triggered when an input/configuration is deleted.
Hi at all, I have a data flow in JSON format from one host that I ingest with HEC, so I have one host, one source and one sourcetype for all events. I want to override the host, source and sourcetype values based on regexes, and I'm able to do this. The issue is that the data flow is an elaboration by an external system (Logstash) that takes raw logs (e.g. from Linux systems) and saves them in a field of the JSON ("message"), adding many other fields. So, after the host, source and sourcetype overriding (which works fine), I want to remove all the extra content in the events and keep only the content of the message field (the raw logs). I'm able to do this too, but I cannot make both transformations work together: in other words, I can override the values but then the extra-content removal doesn't work, or I can remove the extra content but then the overriding doesn't work. I have the following configuration in my props.conf:

[logstash]
# set host
TRANSFORMS-sethost = set_hostname_logstash
# set sourcetype Linux
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
# set source
TRANSFORMS-setsource = set_source_logstash_linux

# restoring original raw log
[linux_audit]
SEDCMD-raw_data_linux_audit = s/.*\"message\":\"([^\"]+).*/\1/g

As you can see, in the first stanza I override the sourcetype from logstash to linux_audit, and in the second I try to remove the extra content using the linux_audit sourcetype. If I use the logstash sourcetype in the second stanza as well, the extra content is removed, but the field overriding (which relies on the extra content) doesn't work. I also tried to set a priority using the props.conf "priority" option, with no luck. I also tried to use source for the first stanza, because source usually has a higher priority than sourcetype, but with the same result. Can anyone give me a hint on how to solve this issue? Thank you in advance. Ciao. Giuseppe
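One hedged workaround, given that index-time settings of the rewritten sourcetype are generally not applied after a TRANSFORMS-based sourcetype override: do the raw-log restore under [logstash] as well, but as a transform with DEST_KEY = _raw listed after the overrides, so the host/source/sourcetype transforms still see the full JSON ("restore_raw_message" is a name chosen for this sketch):

```
# props.conf
[logstash]
TRANSFORMS-sethost = set_hostname_logstash
TRANSFORMS-setsourcetype_linux_audit = set_sourcetype_logstash_linux_audit
TRANSFORMS-setsource = set_source_logstash_linux
TRANSFORMS-zz_restore_raw = restore_raw_message

# transforms.conf
[restore_raw_message]
REGEX = \"message\":\"([^\"]+)
FORMAT = $1
DEST_KEY = _raw
```

The execution order of transform classes is not something to rely on blindly, so verify on a test input that the overrides fire before the _raw rewrite.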
Hi, I'd like to ask about the version of the Splunk TA "Palo Alto Networks App for Splunk" (Splunk_TA_paloalto). Our Palo Alto machines will be replaced and the PAN-OS version will change from 9.1 to 10.2.4. What is the appropriate version of the TA for PAN-OS 10.2.4? Our "Splunk_TA_paloalto" is currently 7.1.0. Thanks in advance.
I am currently integrating Splunk SOAR with Forcepoint Web Security. I am testing the connectivity but getting an SSL:UNSUPPORTED_PROTOCOL error. Forcepoint currently supports only up to TLS 1.1. Is there any way I can set/modify SOAR/Forcepoint to use TLS 1.1 in the meantime, instead of 1.2?
Hi all, I am facing an issue: where exactly should we troubleshoot when a host stops sending cmd logs to Splunk? Thanks
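As a starting point, a search like the following (the index scope and the one-hour threshold are assumptions; widen or narrow as needed) shows when each host last sent data, which helps separate "host stopped sending" from "data is merely delayed":

```
| metadata type=hosts index=*
| eval lastSeen=strftime(recentTime, "%F %T")
| where recentTime < now() - 3600
| sort - recentTime
```

If the host appears here with an old recentTime, the usual next places to check are splunkd.log on the forwarder, the _internal index for that host, and network connectivity to the receiving port.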
Synthetic testing provides proactive email and text notification before end-user performance is affected   CONTENTS | Introduction | Video |Resources | About the presenter  Video Length: 2 ... See more...
CONTENTS | Introduction | Video | Resources | About the presenter

Video Length: 2 min 46 seconds

E-commerce DevOps teams can use AppDynamics to monitor the health and performance of their applications, receiving alerts on issues before significant business impact. Devs can use AppDynamics to provide automatic email and text notifications about such issues. In this demo, see how you can harness a custom email alert notification to view anomalous synthetic transaction events within the Browser App Dashboard, drill into the Health Rules violation page, and link to a custom dashboard to troubleshoot an unexpected increase in synthetic end-user response time for shopping cart activity.

Additional Resources

Learn more about these related Cisco AppDynamics topics in the documentation:

Configure and Manage Alerting Templates
End User Monitoring: Browser App Dashboard
Alert and Respond: Troubleshoot Health Rule Violations
Custom Dashboards Overview: Custom Dashboards
Synthetic Monitoring Overview: Synthetic Monitoring

About the presenter

John Helter, Senior Sales Engineer

John joined Cisco AppDynamics in October of 2021 as a Federal Public Sector Solutions Engineer. Despite the opportunity being quite a collection of "firsts" (first sales-based role, first job in the tech industry, and first time changing jobs during a global pandemic), John is thrilled he took a chance and considers himself extremely fortunate to be an AppDynamo-Cisconian and to work day in and day out with such an amazing team! Primarily supporting current and potential U.S. Department of Defense customers, John is focused on the AppD On-Prem product (which is actually a self-hosted solution that can be deployed in "The Cloud," on virtualized machines, and/or within physical on-premises environments).
Feel free to reach out to him if you have any AppDynamics-related on-prem questions, need deployment support, or to share as many “dad-jokes” as humanly possible!
I am trying to create a pie chart of success vs. failure with the stats command, using the following:

search | stats c(assigned_user) AS Success c(authorization_failure_user) AS Failed
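That stats call produces a single row with two columns, while a pie chart needs one row per slice. A sketch of one way to pivot it, using the field names from the search above (transpose turns columns into rows named "column" and "row 1"):

```
search | stats count(assigned_user) AS Success count(authorization_failure_user) AS Failed
| transpose
| rename column AS status, "row 1" AS count
```

Note that count(field) counts events where the field is present, so this assumes assigned_user appears only on successful events and authorization_failure_user only on failures.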
I've seen a few of the spath topics around, but wasn't able to understand enough to make it work for my data. I have the following json: { "Record": { "contentId": "429636", "levelId": "57", "levelGuid": "3c5b481a-6698-49f5-8111-e43bb7604486", "moduleId": "83", "parentId": "0", "Field": [ { "id": "22811", "guid": "6c6bbe96-deab-46ab-b83b-461364a204e0", "type": "1", "_value": "Need This with 22811 as the field name" }, { "id": "22810", "guid": "08f66941-8f2f-42ce-87ae-7bec95bb5d3b", "type": "1", "p": "need this with 22810 as the field name" }, { "id": "478", "guid": "4e17baea-f624-4d1a-9c8c-83dd18448689", "type": "1", "p": [ "Needs to have 478 as field name", "Needs to have 478 as field name" ] }, { "id": "22859", "guid": "f45d3578-100e-44aa-b3d3-1526aa080742", "type": "3", "xmlConvertedValue": "2023-06-16T00:00:00Z", "_value": "needs 22859 as field name" }, { "id": "482", "guid": "a7ae0730-508b-4545-8cdc-fb68fc2e985a", "type": "3", "xmlConvertedValue": "2023-08-22T00:00:00Z", "_value": "needs 482 as field name" }, { "id": "22791", "guid": "89fb3582-c325-4bc9-812e-0d25e319bc52", "type": "4", "ListValues": { "ListValue": { "id": "74192", "displayName": "Exception Closed", "_value": "needs 22791 as field name" } } }, { "id": "22818", "guid": "e2388e72-cace-42e6-9364-4f936df1b7f4", "type": "4", "ListValues": { "ListValue": { "id": "74414", "displayName": "Yes", "_value": "needs 22818 as field name" } } }, { "id": "22981", "guid": "8f8df6e3-8fb8-478b-8aa0-0be02bec24e3", "type": "4", "ListValues": { "ListValue": { "id": "74550", "displayName": "Critical", "_value": "needs 22981 as field name" } } }, { "id": "22876", "guid": "4cc725ad-d78d-4fc0-a3b2-c2805da8f29a", "type": "9", "Reference": { "id": "256681", "_value": "needs 22876 as field name" } }, { "id": "23445", "guid": "f4f262f7-290a-4ffc-af2b-dcccde673dba", "type": "9", "Reference": { "id": "255761", "_value": "needs 23445 as field name" } }, { "id": "1675", "guid": "ea8f9a24-3d35-49f9-b74e-e3b9e48f8b3b", "type": 
"2" }, { "id": "22812", "guid": "e563eb9e-6390-406a-ac79-386e1c3006a3", "type": "2", "_value": "needs 22812 as field name" }, { "id": "22863", "guid": "a9fe7505-5877-4bdf-aa28-9f6c86af90ae", "type": "8", "Users": { "User": { "id": "5117", "firstName": "data", "middleName": "data", "lastName": "data", "_value": "needs 22863 as field name" } } }, { "id": "22784", "guid": "4466fd31-3ab3-4117-8aa0-40f765d20c10", "type": "3", "xmlConvertedValue": "2023-07-18T00:00:00Z", "_value": "7/18/2023" }, { "id": "22786", "guid": "d1c7af3e-a350-4e59-9353-132a04a73641", "type": "1" }, { "id": "2808", "guid": "4392ae76-9ee1-45bf-ac31-9e323a518622", "type": "1", "p": "needs 2808 as field name" }, { "id": "22802", "guid": "ad7d4268-e386-441d-90b1-2da2fba0d002", "type": "1", "table": { "style": "width: 954px", "border": "1", "cellspacing": "0", "cellpadding": "0", "tbody": { "tr": { "style": "height: 73.05pt", "td": { "style": "width: 715.5pt", "valign": "top", "p": "needs 22802 as field name" } } } } }, { "id": "8031", "guid": "fbcfdf2c-2990-41d1-9139-8a1d255688b0", "type": "1", "table": { "style": "width: 954px", "border": "1", "cellspacing": "0", "cellpadding": "0", "tbody": { "tr": { "style": "height: 71.1pt", "td": { "style": "width: 715.5pt", "valign": "top", "p": [ "needs 8031 as field name", "needs 8031 as field name" ] } } } } }, { "id": "22820", "guid": "0f98830d-48b3-497c-b965-55be276037f2", "type": "1", "p": "needs 22820 as field name" }, { "id": "22807", "guid": "8aa0d0fa-632d-4dfa-9867-b0cc407fa96b", "type": "3" }, { "id": "22855", "guid": "e55cbc59-ad8d-4831-8e6f-d350046026e9", "type": "1" }, { "id": "8032", "guid": "f916365b-e6eb-4ab9-a4ff-c7812a404854", "type": "1", "p": "needs 8032 as field name" }, { "id": "22792", "guid": "8e70c28a-2eec-4e38-b78b-5495c2854b3e", "type": "1", "_value": "needs 22792 as field name " }, { "id": 22793, "guid": "ffeaa385-643a-4f04-8a00-c28ddd026b7f", "type": "4", "ListValues": "" }, { "id": "22795", "guid": 
"c46eac60-d86e-4af4-9292-d194a601f8b6", "type": "1" }, { "id": "22797", "guid": "8cd6e398-e565-4034-8db8-2e2ecb2f0b31", "type": "4", "ListValues": { "ListValue": { "id": "73060", "displayName": "data", "_value": "needs 22797 as field name" } } }, { "id": "22799", "guid": "20823b18-cb9b-47a3-854d-58f874164b27", "type": "4", "ListValues": { "ListValue": { "id": "74410", "displayName": "Other", "_value": "needs 22799 as field name" } } }, { "id": "22798", "guid": "5b32be4c-bc40-45b3-add4-1b22162fd882", "type": "4", "ListValues": { "ListValue": { "id": "74405", "displayName": "N/A", "_value": "needs 22798 as field name" } } }, { "id": "22800", "guid": "6b020db0-780f-4eaf-8381-c122425b71ed", "type": "1", "p": "needs 22800 as field name" }, { "id": "22801", "guid": "06334da8-5392-4a9d-a3eb-d4075ee30787", "type": "1", "p": "needs 22801 as field name" }, { "id": "22794", "guid": "25da1de8-8e81-4281-8ef3-d82d1dc005ad", "type": "4", "ListValues": { "ListValue": { "id": "74398", "displayName": "Yes", "_value": "needs 22794 as field name" } } }, { "id": "22813", "guid": "89760b4f-49be-40ad-8429-89c247e3e95a", "type": "1", "p": "needs 22813 as field name" }, { "id": "22803", "guid": "03b6c826-e15c-4356-89e8-b0bd509aaeb5", "type": "3", "xmlConvertedValue": "2023-06-15T00:00:00Z", "_value": "needs 22803 as field name" }, { "id": "22804", "guid": "d7683f9c-97bb-461a-97df-36ec6596b4fc", "type": "1", "p": "needs 22804 as field name" }, { "id": "22805", "guid": "33386a3a-c331-4d8c-9825-166c0a5235c2", "type": "3", "xmlConvertedValue": "2023-06-15T00:00:00Z", "_value": "needs 22805 as field name" }, { "id": "22806", "guid": "cd486293-9857-475c-9da3-a06f836edb59", "type": "1", "p": "needs 22806 as field name" } ] } } and have been able to extract id, (some) p data and _value data from Record.Field{} using: | spath path=Record.Field{} output=Field | mvexpand Field | spath input=Field | rename id AS Field_id, value AS Field_value, p AS Field_p , but have been unable get any other data 
out. The p values that I can get out are single-valued only. In particular, I need to get the multi-value fields from ListValues{}.ListValue out. In addition, I need to map the values in _value and p to the top-level id field in that array. I think the code sample provided above explains what's needed. I know I can do a | eval {id}=value, but it's complicated when there are so many more fields than value, or when the fields are nested. Can someone help with this?
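A sketch of one approach for the dynamic field names, assuming one Record per event (the coalesce order is an assumption about which value should win when several are present): pull each candidate value out per Field element, then use eval {id} to turn the id into the field name:

```
| spath path=Record.Field{} output=Field
| mvexpand Field
| spath input=Field
| spath input=Field path=ListValues.ListValue._value output=list_value
| eval val=coalesce('_value', 'p', list_value)
| eval {id}=val
| fields - Field id guid type val
| stats values(*) as *
```

Multi-valued p arrays survive this because spath returns them as multivalue fields; the final stats collapses the expanded rows back into one row per record.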
Hello, how do I give the same rank for the same score? Students d and e have the same score of 73, so they are both Rank 4, but student f is Rank 6. Rank 5 is skipped because students d and e share a score. Thank you for your help. Expected result:

Student  Score  Rank
a        100    1
b        95     2
c        84     3
d        73     4
e        73     4
f        54     6
g        43     7
h        37     8
i        22     9
j        12     10

This is what I have figured out so far, but it doesn't take tied scores into consideration:

| makeresults format=csv data="Student, Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| streamstats count
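Building on the makeresults above, one way to get the skipped-rank behaviour ("competition ranking") is to number the sorted rows with streamstats and then collapse ties to the minimum row number per score:

```
| makeresults format=csv data="Student,Score
a,100
b,95
c,84
d,73
e,73
f,54
g,43
h,37
i,22
j,12"
| sort - Score
| streamstats count as Rank
| eventstats min(Rank) as Rank by Score
```

Here d and e initially receive row numbers 4 and 5, the eventstats keeps the smaller one for both, and f stays at 6, which matches the expected table.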
Hello! This is probably a simple question, but I've been kind of struggling with it. I'm building out my first playbook, which triggers on new artifacts. The artifacts include fields for type, value and tag. What I'm trying to do is have those fields from the artifact passed directly into a custom code block in my playbook. How do I go about accessing those fields? I've tried using phantom.collect2(container=container, datapath=["artifact:FIELD_NAME*"]) in the code block, but it doesn't return anything. I thought maybe I needed to set up custom fields to define type, value and tag in the custom fields settings, but that didn't change anything either. Any help would be appreciated, thank you!
I am looking to extract some information from a Values field that has two values within it. How can I specify which of the values I need in a search, given that the two values are meant to be "read" and "written"? This is my current search right now, and I think it is including both values together:

index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(value) as min max(value) as max avg(value) as avg
| eval min=round(min, 2)
| eval max=round(max, 2)
| eval avg=round(avg, 2)
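Assuming this is collectd data where each event carries a field distinguishing the read and written data sources (often type_instance, though the exact field name depends on how collectd writes to Splunk; check one raw event to confirm), the two can be reported side by side by splitting the stats on that field:

```
index="collectd_test" plugin=disk type=disk_octets plugin_instance=$plugin_instance1$
| stats min(value) as min max(value) as max avg(value) as avg by type_instance
| foreach min max avg [ eval <<FIELD>> = round('<<FIELD>>', 2) ]
```

This yields one row for "read" and one for "written"; adding e.g. type_instance=read to the base search would instead keep only the read values.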
All, leveraging the following article (https://community.splunk.com/t5/Other-Usage/How-to-export-reports-using-the-REST-API/m-p/640406/highlight/false#M475) I was able to successfully adapt the script to: 1. Run using an API token (as opposed to credentials). 2. Run a search I am interested in returning data from. I am, however, running into an error with my search (shown below):

<?xml version="1.0" encoding="UTF-8"?>
<response>
<messages>
<msg type="ERROR">Unparsable URI-encoded request data</msg>
</messages>
</response>

The script itself now looks like this (I have removed the token and obscured the Splunk endpoint for obvious reasons):

#!/bin/bash
# A simple bash script example of how to get notable event details from the REST API
# EXECUTE search and retrieve SID
SID=$(curl -H "Authorization: Bearer <token ID here>" -k https://host.domain.com:8089/services/search/jobs -d search=" search index=index sourcetype="sourcetype" source="source" [ search index="index" sourcetype="sourcetype" source="source" deleted_at="null" | rename uuid AS host_uuid | stats count by host_uuid | fields host_uuid ] | rename data.id AS Data_ID host_uuid AS Host_ID port AS Network_Port | mvexpand data.xrefs{}.type | strcat Host_ID : Data_ID : Network_Port Custom_ID_1 | strcat Host_ID : Data_ID Custom_ID_2 | stats latest(*) as * by Custom_ID_1 | search state!="fixed" | search category!="informational" | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")" <removed some of the search for brevity> \
| grep "sid" | awk -F\> '{print $2}' | awk -F\< '{print $1}')
echo "SID=${SID}"

Omitted the remaining portion of the script for brevity...

It is at the point shown (| eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")) that I am getting the error in question. The search runs fine up to the point where I convert the time. I tried escaping with "\", but that did not seem to help. I am sure I am missing something simple and am looking for some help.
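A likely cause of "Unparsable URI-encoded request data" is that -d sends the payload as-is, so the quotes, pipes and brackets in the SPL reach splunkd unencoded. A sketch of the fix (endpoint and token placeholders as in the script above; the SPL is abbreviated) is to let curl percent-encode the body with --data-urlencode:

```
# Build the SPL in a single-quoted variable, then let curl URL-encode it.
SEARCH='search index=index | eval unixtime=strptime(first_found,"%Y-%m-%dT%H:%M:%S")'
SID=$(curl -s -k -H "Authorization: Bearer <token ID here>" \
  https://host.domain.com:8089/services/search/jobs \
  --data-urlencode "search=${SEARCH}" \
  | grep "sid" | awk -F\> '{print $2}' | awk -F\< '{print $1}')
echo "SID=${SID}"
```

Single-quoting the SPL also avoids the nested double-quote problem visible in the original -d string, where "sourcetype" and the strptime format string terminate the shell string early.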