All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I created a Splunk_TA_nix/local/inputs.conf and created 2 different indexes in indexes.conf. In inputs.conf I then monitored 2 directories, each with a different index but the same sourcetype. I put files in the directories, but the 2nd index doesn't get any data in. Help, please.
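For comparison, a minimal sketch of such a setup (all paths, index names, and sourcetype names below are hypothetical); each index referenced in inputs.conf must exist in indexes.conf on the indexer, and a misspelled or missing index is a common reason one monitor silently produces no data:

```
# Splunk_TA_nix/local/inputs.conf
[monitor:///var/log/app_one]
index = app_one_idx
sourcetype = app:log

[monitor:///var/log/app_two]
index = app_two_idx
sourcetype = app:log

# indexes.conf (on the indexer; restart required after adding)
[app_one_idx]
homePath   = $SPLUNK_DB/app_one_idx/db
coldPath   = $SPLUNK_DB/app_one_idx/colddb
thawedPath = $SPLUNK_DB/app_one_idx/thaweddb

[app_two_idx]
homePath   = $SPLUNK_DB/app_two_idx/db
coldPath   = $SPLUNK_DB/app_two_idx/colddb
thawedPath = $SPLUNK_DB/app_two_idx/thaweddb
```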
Hi, I am interested in assigning a field value for priority that is determined by the name of the saved search. Is it possible to pull the name of the saved search from within the saved search itself?
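As far as I know, SPL does not expose the running saved search's own name as a search-time function (alert actions do get a `$name$` token). One hedged workaround, assuming `priority_by_search.csv` is a lookup you maintain yourself mapping search names to priorities, is to set the name as a literal in each saved search:

```
... base search ...
| eval search_name="My Saved Search Name"
| lookup priority_by_search.csv search_name OUTPUT priority
```

This keeps the name-to-priority mapping in one file instead of hard-coding a priority into every search.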
We are validating our Splunk ES 6.1.1 installation and have noticed that the "asset_lookup_by_cidr" KV store based lookup is not being populated. It looks like ES 6.1.1 now runs a Python script as a modular input to extract the data from our assets file into the KV store for further processing. It's not working, and I am struggling to figure out how to troubleshoot this Python modular-input approach to the extraction. Any idea where I can look for issues? Here are some of the items I have already checked:
1. Our asset data does include the ip field with entries containing subnet masks, like 127.0.0.1/32.
2. Running the original 5.x correlation query, which used to populate the "asset_lookup_by_cidr" table, produces results. This leads me to believe the data is in good shape.
3. A review of the _internal logs is not showing any Python scripting errors from the modules that I have noticed.
Thank you, Ken
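Two hedged checks that are often useful here (lookup and log locations as I understand them; adjust if your environment differs) — first confirm whether the KV store collection is truly empty, then search the internal logs more broadly than just Python tracebacks:

```
| inputlookup asset_lookup_by_cidr | stats count

index=_internal sourcetype=splunkd (asset OR identity) (ERROR OR WARN*)
```

If the first search returns 0 while your source assets lookup has rows, the asset and identity management pages in ES are another place to look for merge/extraction status.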
Hello, for almost a year we've been sending alerts through a business email account in Office 365. But since last week these messages cannot be sent because of the following error:

5.2.0 STOREDRV.Submission.Exception:SendAsDeniedException.MapiExceptionSendAsDenied; Failed to process message due to a permanent exception with message Cannot submit message.

We tried with other email accounts and it works correctly, but not with the Office 365 one. Is there any limitation between Splunk and the Office 365 SMTP server? What could the problem be here? Any thoughts? Thanks.
Hello Team, I need to set up an alert that fires when two conditions are met:
1st condition (string): BEA-000337
2nd condition: Started time is greater than 6000 ms
Could you please help?
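A sketch of what such an alert search might look like (the index name and the `rex` pattern for the started time are assumptions; replace them with whatever your events actually contain):

```
index=your_index "BEA-000337"
| rex "Started time:\s*(?<started_ms>\d+)"
| where started_ms > 6000
```

Saved as an alert that triggers when the number of results is greater than zero, this fires only when both the string and the threshold condition are present.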
I'm running the query below across the network and would like to pinpoint the search to two users rather than run it across the entire network and get back hundreds of users. When I use the "where" clause and enter both users that I'd like to search under, nothing comes back. The moment I remove the "where" clause, data comes back. Should my where clause be positioned elsewhere up the pipe? Or is it just a matter of incorrect syntax?

| from datamodel:"Authenticate"
| eval host=upper(host)
| search [| inputlookup hosts_upper.csv]
| search user!=svc* NOT src_user IN (system, dbagent, svc*)
| search NOT signature_id IN (4769, 4672, 4648)
| eval "Logon Type" = case(Logon_Type == 2, "Interactive", Logon_Type == 3, "Network", Logon_Type == 4, "Batch", Logon_Type == 5, "Service", Logon_Type == 7, "Unlock", Logon_Type == 8, "Clear Text", Logon_Type == 9, "New Credentials", Logon_Type == 10, "Remote Interactive", Logon_Type == 11, "Cached Interactive")
| fillnull value=NULL "Unknown"
| table _time action host user src app Logon_Type "Logon Type" host signature sourcetype
| where user="johndoe" OR user="janesmith"
| sort -_time
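One thing worth checking: `where user="johndoe"` does a case-sensitive string comparison, so if the datamodel stores `JOHNDOE` or a domain-qualified value like `DOMAIN\johndoe`, nothing matches. The `search` command, by contrast, matches values case-insensitively and supports wildcards, and filtering early also cuts the work the rest of the pipeline does. A hedged rewrite of the start of the pipeline:

```
| from datamodel:"Authenticate"
| search user IN ("johndoe", "janesmith")
| eval host=upper(host)
...
```

If the values are domain-qualified, `| search user="*johndoe*" OR user="*janesmith*"` is a more forgiving variant.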
I need an SPL search that shows when a forwarder is idle or isn't forwarding.
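A common pattern (the 15-minute threshold is illustrative) uses the forwarder connection metrics in `_internal`, run from the indexers or search head:

```
index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(_time) AS last_seen by hostname
| eval minutes_idle = round((now() - last_seen) / 60, 1)
| where minutes_idle > 15
```

Any forwarder whose last connection is older than the threshold is either idle or not forwarding; `| metadata type=hosts` is another quick way to spot hosts that have stopped sending events.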
I'm working on a project to set up SQL Server DB monitoring in Splunk. I'm creating custom stored procedures and then fetching the data into Splunk. While most of them are working fine, I'm running into an "invalid object" issue if the stored procedure uses temp tables.

Stored procedure script:

USE DBName
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE schemaname.[serverconfig]
AS
BEGIN
    SET NOCOUNT ON

    CREATE TABLE #CPUValues (
        [index] SMALLINT
        ,[description] VARCHAR(128)
        ,[server_cores] SMALLINT
        ,[value] VARCHAR(5)
    )

    CREATE TABLE #MemoryValues (
        [index] SMALLINT
        ,[description] VARCHAR(128)
        ,[server_memory] DECIMAL(10, 2)
        ,[value] VARCHAR(64)
    )

    INSERT INTO #CPUValues EXEC xp_msver 'ProcessorCount'
    INSERT INTO #MemoryValues EXEC xp_msver 'PhysicalMemory'

    SELECT convert(VARCHAR(50), cast(getutcdate() AS DATETIMEOFFSET(3)), 127) ExecutionTime
        ,cast(SERVERPROPERTY('SERVERNAME') as varchar(100)) AS 'instance'
        ,cast(v.sql_version as varchar(100)) as sql_version
        ,cast((SELECT SUBSTRING(CONVERT(VARCHAR(255), SERVERPROPERTY('EDITION')), 0, CHARINDEX('Edition', CONVERT(VARCHAR(255), SERVERPROPERTY('EDITION')))) + 'Edition') as varchar(max)) AS sql_edition
        ,cast(SERVERPROPERTY('ProductLevel') as varchar(max)) AS 'service_pack_level'
        ,cast(SERVERPROPERTY('ProductVersion') as varchar(max)) AS 'build_number'
        ,cast((SELECT DISTINCT local_tcp_port FROM sys.dm_exec_connections WHERE session_id = @@SPID) as varchar(max)) AS [port]
        ,cast((SELECT cast([value] AS INT) FROM sys.configurations WHERE name LIKE '%min server memory%') as varchar(max)) AS min_server_memory
        ,cast((SELECT cast([value] AS INT) FROM sys.configurations WHERE name LIKE '%max server memory%') as varchar(max)) AS max_server_memory
        ,cast((SELECT ROUND(CONVERT(DECIMAL(10, 2), server_memory / 1024.0), 1) FROM #MemoryValues) as varchar(max)) AS server_memory
        ,cast(server_cores as varchar(max)) as server_cores
        ,cast((SELECT COUNT(*) AS 'sql_cores' FROM sys.dm_os_schedulers WHERE STATUS = 'VISIBLE ONLINE') as varchar(max)) AS sql_cores
        ,cast((SELECT cast([value] AS INT) FROM sys.configurations WHERE name LIKE '%degree of parallelism%') as varchar(max)) AS max_dop
        ,cast((SELECT cast([value] AS INT) FROM sys.configurations WHERE name LIKE '%cost threshold for parallelism%') as varchar(max)) AS cost_threshold_for_parallelism
    FROM #CPUValues
    LEFT JOIN (
        SELECT CASE
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '8%' THEN 'SQL Server 2000'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '9%' THEN 'SQL Server 2005'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '10.0%' THEN 'SQL Server 2008'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '10.5%' THEN 'SQL Server 2008 R2'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '11%' THEN 'SQL Server 2012'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '12%' THEN 'SQL Server 2014'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '13%' THEN 'SQL Server 2016'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '14%' THEN 'SQL Server 2017'
            WHEN CONVERT(VARCHAR(128), SERVERPROPERTY('PRODUCTVERSION')) LIKE '15%' THEN 'SQL Server 2019'
            ELSE 'UNKNOWN'
        END AS sql_version
    ) AS v ON 1 = 1

    DROP TABLE #CPUValues
    DROP TABLE #MemoryValues

    SET NOCOUNT OFF
END
GO

Here's the error message I'm getting:

2020-06-22 12:15:11.101 -0400 [dw-53 - POST /api/inputs] ERROR io.dropwizard.jersey.errors.LoggingExceptionMapper - Error handling a request: ad4b202ff32bca30
java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: A processing error "Invalid object name '#CPUValues'." occurred.
at com.splunk.dbx.server.util.ResultSetMetaDataUtil.isTableHavingSameNameColumns(ResultSetMetaDataUtil.java:145)
at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.create(InputServiceImpl.java:139)
at com.splunk.dbx.server.api.service.conf.impl.InputServiceImpl.create(InputServiceImpl.java:38)
at com.splunk.dbx.server.api.resource.InputResource.createInput(InputResource.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at ...

I can execute the stored procedure in the SQL editor successfully, but when I move on to the next step it fails with the invalid object error. Things I've tried so far:
1. Replacing the temp tables with actual tables.
2. Defining the db name, schema name, and table name.
3. Using double quotes around all db, schema, and table names.
4. Using brackets around db, schema, and table names.
5. Moving all tables to the default dbo schema (they were in a different schema).
6. Trying 3 different versions of the Java driver: the default 4.2, 7.4.1, and 8.2.2.
7. Trying the same but simplified stored procedure, containing just one temp table and no duplicate column names.
All of the above fail with the same error; any help or suggestion is highly appreciated. Thank you.
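The stack trace points at DB Connect's result-set metadata inspection (`isTableHavingSameNameColumns`), which probes the statement before execution; `#temp` tables don't exist at that point, hence "Invalid object name". A workaround often suggested for this class of problem is replacing `#temp` tables with table variables, which are part of the batch itself. A hedged sketch of the pattern (not the full procedure):

```
-- Sketch: table variable instead of a #temp table.
-- INSERT ... EXEC into a table variable is supported in SQL Server 2008+.
DECLARE @CPUValues TABLE (
    [index] SMALLINT,
    [description] VARCHAR(128),
    [server_cores] SMALLINT,
    [value] VARCHAR(5)
);

INSERT INTO @CPUValues EXEC xp_msver 'ProcessorCount';

SELECT server_cores FROM @CPUValues;
```

If table variables are not an option, materializing the results into a permanent staging table that the procedure truncates and refills is another route that keeps the metadata probe happy.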
We posted a blog recently with Spring updates from Splunk Developer and wanted to share it here, too where we can discuss! Already working on the Summer update so let us know if there's anything you'd like to see in that upcoming newsletter. https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-developer-spring-2020-update.html   Is it the middle of Spring already? (And Fall, for our Southern Hemisphere readers!) A lot has happened at Splunk since our last developer updates at .conf19. In case you missed any of the great developer sessions there, grab some time to watch what you missed!  Python 2 to Python 3 Migration With the new year came the EOL of Python 2, and while runtimes for both Python 2 and Python 3 were included in Splunk Enterprise 8.0 and Splunk Cloud 8.0, the July update of Splunk Cloud 8.0.x will be the last to include the Python 2 runtime, and the next major or minor release of Splunk Enterprise 8.0 after July 1 will no longer include the Python 2 runtime. Read the “Migrating your Splunkbase App and Users to Python 3 and Splunk 8.0” blog for more information and the Python 3 migration documentation. Splunkbase Updates Beginning in July, the Splunkbase App Archive Policy will be updated to align with the Splunk Support Policy and the currently supported versions of Splunk Enterprise. Apps in Splunkbase that have not been updated within two years, or are not compatible with a currently-supported version of Splunk Enterprise, will be archived. App authors will still receive pre-archive notification one month in advance. To prevent an app from being archived, just submit an update that is compatible with a supported release of Splunk Enterprise. Also starting in June, Product Compatibility options for Splunk Enterprise will no longer have Splunk Enterprise 7.0 and older release options to select when submitting an app, updating an app, or searching for an app. SimData 1.1 Are you testing your application with data? 
Instead of using a sample set of data that is repetitive and unrealistic, SimData generates a rich and robust set of events from real-world situations by mimicking how multiple systems work together and affect the performance of your system. Download SimData v1.1.0 here and try the examples here. Visual Studio Code Splunk Extension Writing an extension to Splunk in Python? You could be using Microsoft Visual Studio Code to speed your development. Visual Studio Code is a free, cross-platform, highly rated code editor from Microsoft that provides a rich development environment including debugging capabilities such as breakpoints, stepping into code, variable inspection, and displaying the call stack. Visual Studio Code is extensible, and the Splunk integration takes advantage of the extensibility to provide intelligence about Splunk .conf files and interact with Splunk via the editor. Read more here. More New Releases and Updates Did you see the release of Java Logging Libraries v1.8? This update includes feature improvements and bug fixes to address memory leaks, decrease memory usage, and stop data loss! Splunk Enterprise Python SDK v1.6.13 added Bearer token support using Splunk Token in Splunk Enterprise v7.3 in v1.6.12 and a bug fix in v1.6.13. Want to extend the Splunk Enterprise REST API with custom endpoints? The docs were recently updated so you can add functionality to your app that Splunk Enterprise doesn’t support natively or to manage your app’s custom configuration files. The updated AppInspect CLI and API v2.1.0 add a lot of great new checks for Python 3 and testing against Splunk 8.0.2 for improved measurement of readiness with Splunk Cloud.  Splunk Supported Versions Notification Keep up to date with the latest supported versions of Splunk products on the Splunk Support Policy page. Here you’ll find the dates when releases of Splunk products are no longer supported. 
End of support for Splunk Enterprise 7.1 is extended to October 31, 2020 (unless otherwise noted on the Support Policy page).
I am facing issues while installing Splunk Enterprise splunk-8.0.4.1-ab7a85abaa98-x64-release: the installer keeps rolling back. Please suggest whether there are any prerequisites before installing Splunk Enterprise. Do I need to install the Python SDK?

Error message:

This appears to be your first time running this version of Splunk.
23:39:47 C:\Windows\system32\cmd.exe /c ""C:\Program Files\Splunk\bin\splunk.exe" _internal first-time-run --answer-yes --no-prompt >> "C:\Users\SONA-S~1\AppData\Local\Temp\splunk.log" 2>&1"
Traceback (most recent call last):
File "C:\Program Files\Splunk\Python-3.7\Lib\site-packages\splunk\clilib\cli.py", line 25, in <module>
import splunk.clilib.control_api as ca
File "C:\Program Files\Splunk\Python-3.7\lib\site-packages\splunk\clilib\control_api.py", line 23, in <module>
import splunk.clilib.i18n as i18n
ModuleNotFoundError: No module named 'splunk.clilib.i18n'
I have a search that displays all the data I want, but I need to add one more column showing the user's full name. Here is the query:

sourcetype=MSExchange:*:MessageTracking source_id=SMTP (event_id=RECEIVE) user_bunit=Energy (recipient_domain="x.com" OR recipient_domain="x.com")
| stats count as RECEIVE by recipient
| append [search sourcetype=MSExchange:*:MessageTracking source_id=SMTP (event_id=SEND) user_bunit=Energy (recipient_domain="x.com" OR recipient_domain="x.com") | stats count as SEND by recipient]
| append [search sourcetype=MSExchange:*:MessageTracking user_bunit=Energy tag=delivery (recipient_domain="x.com" OR recipient_domain="x.com") | stats count as delivery by recipient]
| lookup EnergyAD.csv src_user_nick as src_user_nick
| stats values(SEND) as SEND, values(RECEIVE) as RECEIVE, values(delivery) as delivery, values(src_user_nick) as src_user_nick by recipient
| rename recipient as "Email Account" SEND as "Outbound Messages" RECEIVE as "Inbound Messages" delivery as "Internal Messages"

The table displays, but nothing shows under src_user_nick (which is the user's full name).
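The likely cause: `| lookup EnergyAD.csv src_user_nick as src_user_nick` tries to match on an `src_user_nick` field that doesn't exist in the results at that point (after `stats ... by recipient`, only `recipient` and the counts remain). The general shape is `| lookup <file> <lookup_field> AS <event_field> OUTPUT <field_to_add>`, matching on a field you actually have. A hedged correction, assuming `EnergyAD.csv` contains an email-address column and a display-name column (the column names `email` and `displayName` here are guesses; check the CSV header):

```
| lookup EnergyAD.csv email AS recipient OUTPUT displayName AS "Full Name"
```

Placed after the final `stats`, this joins on `recipient` and adds the full name column directly.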
Hi, I need some help. I have the requestPath below and am able to capture the whole path, but I can't get a single count because the Div ID and Account No differ per request. I want to extract only "/ecp/stream/v1/purchase/" and count it, so that I can break out the different response statuses.

"requestPath":"/ecp/stream/v1/purchase/NTX.8160/8260180902213447","responseStatus":204,"responseSize":0,"responseContent":"","responseTime":27}

My current extraction:

rex field=_raw "requestPath":"(?<reqPath>[^?|^\s|^"]+)"
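If the goal is to count only the purchase calls regardless of Div ID and Account No, one sketch is to capture just the fixed prefix and leave the variable suffix out of the capture group:

```
| rex "\"requestPath\":\"(?<reqPath>/ecp/stream/v1/purchase/)"
| search reqPath="/ecp/stream/v1/purchase/"
| stats count by responseStatus
```

Because every purchase request now yields the same `reqPath` value, the counts group cleanly by response status instead of splitting across thousands of unique paths.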
I have a dashboard which counts the number of times a user performed an action.  I have 3 time frames (last 24h, 7d, 30d) and thought I would try using  three base searches since I do more transformation with each "set" of data. For testing, I left my 3 original panels and wrote 3 new base searches and added new panels to use those.  I noticed that the results from my original searches and the base searches don't match up.  But if you click the magnifying glass, "Open in Search" then it matches the non-base search.  The 24h search matches the non-base but the other two (7d & 30d) don't and I have no idea why. My XML   <dashboard> <label>base_testing</label> <search id="base24h"> <query> index=foo | fields _time file user </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <search id="base7d"> <query> index=foo | fields _time file user </query> <earliest>-7d@h</earliest> <latest>now</latest> </search> <search id="base30d"> <query> index=foo | fields _time file user </query> <earliest>-30d@h</earliest> <latest>now</latest> </search> <row> <panel> <title>24 hours</title> <table> <search> <query>index=foo | fields _time file user | stats dc(file) AS "File Count" by user | sort - "File Count" | head 20</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="count">5</option> <option name="drilldown">none</option> </table> </panel> <panel> <title>7 days</title> <table> <search> <query>index=foo | stats dc(file) AS "File Count" by user | sort - "File Count" | head 20</query> <earliest>-7d@h</earliest> <latest>now</latest> </search> <option name="count">5</option> <option name="drilldown">none</option> </table> </panel> <panel> <title>30 days</title> <table> <search> <query>index=foo | stats dc(file) AS "File Count" by user | sort - "File Count" | head 20</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> <option name="count">5</option> <option name="drilldown">none</option> </table> </panel> </row> <!-- BASE ROW --> 
<!-- BASE ROW --> <!-- BASE ROW --> <row> <panel> <title>BASE 24 hours</title> <table> <search base="base24h"> <query>| stats dc(file) AS "File Count" by user | sort - "File Count" | head 20</query> </search> <option name="count">5</option> <option name="drilldown">none</option> </table> </panel> <panel> <title>BASE 7 days</title> <table> <search base="base7d"> <query>| stats dc(file) AS "File Count" by user | sort - "File Count" | head 20</query> </search> <option name="count">5</option> <option name="drilldown">none</option> </table> </panel> <panel> <title>BASE 30 days</title> <table> <search base="base30d"> <query>| stats dc(file) AS "File Count" by user | sort - "File Count" | head 20</query> </search> <option name="count">5</option> <option name="drilldown">none</option> </table> </panel> </row> </dashboard>    
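One likely cause of the mismatch above: a non-transforming base search (one ending in `fields` rather than a transforming command like `stats`) is capped at 500,000 events in Simple XML, so the 7d and 30d windows get silently truncated while 24h fits under the cap. "Open in Search" matches the standalone panels because it re-runs the query without that cap. A hedged restructuring is to make the base search itself transforming and leave only the sorting to the post-process, sketched here for the 30d pair:

```xml
<search id="base30d">
  <query>index=foo | stats dc(file) AS "File Count" by user</query>
  <earliest>-30d@h</earliest>
  <latest>now</latest>
</search>
<!-- ...panel post-process... -->
<search base="base30d">
  <query>| sort - "File Count" | head 20</query>
</search>
```

(Separately, note the standalone 30d panel uses `-30d@d` while the base uses `-30d@h`, which alone would cause a small discrepancy.)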
Hi folks, we deployed an updated outputs.conf across our enterprise to get the forwarders to report to new index servers. I have a forwarder that, according to its own logs, is still reporting to the old indexers. The debug tool says it is configured properly for the new indexers. I have disabled the deployment client. I can't find any reference to the old servers anywhere in the config, yet it is still reporting to the old servers after multiple restarts.
Hello, I just installed Splunk Enterprise on Linux on an AWS EC2 instance. It looked like it works, I can start splunk and login as the admin user. But if I go to the indexes screen using Settings --> Indexes the screen that appears is blank, apart from the black menu bar across the top. There is no sign of an error, just nothing - screenshot is below. I'm new to Splunk, can you suggest where to look for the error? Or is this a familiar error? Thanks, Dave    
Hello, how can I return results, for all events, where a field value is the same in two of those events? For example, on a website, I want only the events where one user viewed two different pages, with each page load being a separate event. I don't want to specify the users, but rather return all the users that viewed both pages.

<base> page=abcd OR page=1234 AND  (eval if userids are equal)
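A standard pattern for "users who did both" (the field names `userid` and `page` are assumptions from the description):

```
page=abcd OR page=1234
| stats dc(page) AS pages_viewed values(page) AS pages by userid
| where pages_viewed = 2
```

`dc(page)` counts distinct pages per user across separate events, so only users with events for both pages survive the `where`. To keep the raw events rather than a per-user summary, `eventstats` can be used in place of `stats` and the same `where` applied afterward.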
Good day Splunkers, Today doing an audit of my Alerts, I opened one in "Open Search" and immediately got "Server Error" upon trying to run it.  Checked Scheduler Logs and Alert is Successfully running and issuing email as action.  It is a very lengthy beast for "PowerShell Command Execution" checking tons of IOC conditions.  I 'Ctrl-X'd' my way through script eventually arriving at the conclusion that it appears to be breaking on one particular search term.  Now this worked previously (when originally authored in version 7.x) and currently working via Scheduler so I have questions around why running in Scheduler works vs. in Open Search.... do they run differently?  Is this 7.x 8.x (Python 2 vs. 3 syntax) difference maybe?  We are running Enterprise 'All-in-one' Version:8.0.0 Build:1357bef0a7f6.  Below is first part of our query which is enough for you to test with... gist of it is the '-' in "Set-ExecutionPolicy" (worked before) now causes "Server Error". For testing simply add a wildcard in place of dash - change *Set-ExecutionPolicy* to *Set*ExecutionPolicy*.  Other dashes in query do not seem to affect this.  This is very weird behavior and has hurt my brain a bit this morning.  Please advise.... index=security OR index=wineventlog source="WinEventLog:Microsoft-Windows-PowerShell/Operational" EventCode=4103 OR EventCode=4104 OR EventCode=4688 OR EventCode=24577 Message=*Set-ExecutionPolicy*   Best regards, Greg    
I'm trying to track the elapsed time it takes a user to complete a web application based on the earliest and latest occurrences of specific messages.  The end goal is to have two separate charts, the first, a scatterplot with the date on the X axis and the elapsed times to complete on the Y axis (multiple values present each day); the second chart almost the same but an average of the elapsed time on the Y axis rather than all values.   My query thus far looks like: sourcetype="PCF:log" cf_app_name=myApp (msg="*launch configuration*" OR msg="*complete for Account Owner*" ) | rex field=msg "AccountNum: (?<AccountNum>\w+)" | rex field=msg "UserId: (?<UserID>\w+)" | stats min(_time) as start_time max(_time) as end_time by AccountNum, UserID | eval time_to_complete = end_time-start_time | eval time_to_complete=strftime(time_to_complete, "%M:%S") The shortcomings are: 1) A user can re-launch after completing (launch, complete, launch) and so the min/max _times can be off in these scenarios. I need something like earliest("launch configuration) and latest("complete for Account Owner") but not sure on the syntax for that.  2) The OR condition for the logs returns entries where only launch happens, and never complete. I'd like to discard those from the calculations rather than have a bunch of elapsed time of 00:00 present.  3) I'm not totally sure that this query will track combinations of users/accounts as I need.  If User A launches with account 1 but never finishes, and user B launches with account 1 and does finish, I only (ideally) want the elapsed time for user B.   4) I'm generally struggling with how to incorporate the date component into a graph as well.   Sincerely appreciate help, and explanations welcome so I can better learn.
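For points 1-3, one hedged sketch: tag each event as a launch or a completion, take the earliest of each per AccountNum/UserID pair, and drop pairs that never completed. Grouping by both fields also keeps User A's abandoned launch from polluting User B's timing on the same account:

```
sourcetype="PCF:log" cf_app_name=myApp (msg="*launch configuration*" OR msg="*complete for Account Owner*")
| rex field=msg "AccountNum: (?<AccountNum>\w+)"
| rex field=msg "UserId: (?<UserID>\w+)"
| eval step=if(match(msg, "launch configuration"), "launch", "complete")
| stats earliest(eval(if(step="launch", _time, null()))) AS start_time
        earliest(eval(if(step="complete", _time, null()))) AS end_time
        by AccountNum, UserID
| where isnotnull(end_time) AND end_time > start_time
| eval time_to_complete = end_time - start_time
| eval _time = start_time
| timechart span=1d avg(time_to_complete) AS avg_seconds
```

For point 4, re-assigning `_time` from `start_time` (as above) puts the date back on the X axis; drop the final `timechart` and chart `time_to_complete` over `_time` for the scatterplot variant.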
Hello, I'm trying to configure props.conf for a file that has a header line. I don't have any props.conf configured yet and am looking for help in configuring this. Thanks in advance. Example logfile:

Field1 Field2 Field3 Field4 Field5 Field6 Field7
------+------+---------------------------+---------------------------+---------------------------
0 1 6/16/20 18:35:23:193 EDT 6/16/20 18:35:23:193 EDT xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
1 1 6/16/20 18:35:23:216 EDT 6/16/20 18:35:23:216 EDT yyyyyyyyyyyyyyyyyyyyyy
2 1 6/16/20 18:35:23:285 EDT 6/16/20 18:35:23:285 EDT zzzzzzzzzzzzzzzzz
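A hedged starting point (the sourcetype name is made up, and the settings are a sketch to adapt rather than a tested config). One caveat: the timestamp values themselves contain spaces, so whitespace-delimited header extraction may split them; if that happens, search-time `rex` extractions may be the more reliable route for this layout.

```
# props.conf
[my_header_log]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = CSV
FIELD_DELIMITER = whitespace
HEADER_FIELD_LINE_NUMBER = 1
# skip the ------+------ separator row
PREAMBLE_REGEX = ^-+\+
TIMESTAMP_FIELDS = Field3
TIME_FORMAT = %m/%d/%y %H:%M:%S:%3N %Z
```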
Hi! I am trying to rename field values from a number to a string. I calculated whether the project is past due by subtracting the project ask date from today's date:

| eval "Past/Future"=round(abs((relative_time(now(), "@d")-relative_time(strptime(project_ask_date,"%Y-%m-%d %H:%M:%S"), "@d"))/86400),0)

Now I want to rename the field values: if the number is > 0 then it is past due, otherwise the project's due date is in the future. Thank you!
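An `if()` on the signed difference does the labelling; note that the `abs()` in the eval above discards the sign, so past and future dates become indistinguishable. A sketch without `abs()`:

```
| eval days_diff = round((relative_time(now(), "@d") - relative_time(strptime(project_ask_date, "%Y-%m-%d %H:%M:%S"), "@d")) / 86400, 0)
| eval "Past/Future" = if(days_diff > 0, "Past Due", "Future")
```

A positive `days_diff` means the ask date falls before today (past due); zero or negative means today or later.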