All Topics

I'm getting this error after upgrading the Microsoft 365 app in Splunk:

Error in 'SearchParser': The search specifies a macro 'm365_default_index' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.
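A hedged suggestion: the upgrade may have removed the macro or reset its sharing. If the macro still exists, check its permissions under Settings, Advanced search, Search macros and share it globally (or at least with the app running the search). If it is truly gone, here is a minimal sketch of re-creating it in the app's local/macros.conf; the index name m365 is an assumption, substitute wherever your Microsoft 365 data actually lives:

[m365_default_index]
definition = index=m365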
Hello, I have a JSON source where one of the fields has an escape character in its name. I actually cannot see it in highlighted mode or in the extracted field list, where it appears as "Report Refresh Date"; it only shows in raw view: "\ufeffReport Refresh Date". Naturally, searches do not work with what appears to be "Report Refresh Date". If I select the field it appears as "'red dot' Report Refresh Date" and the search works, but it cannot be saved - the red dot is "forgotten", as it is by any copy/paste attempt out of the search bar. How can I get rid of this? Would adding

FIELDALIAS-alias1 = "\ufeffReport Refresh Date" AS "Report_Refresh_Date"

to props.conf help?
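For what it's worth, \ufeff is the Unicode byte-order mark (BOM), which is why it is invisible in most views. The FIELDALIAS you propose may work at search time; another commonly suggested route is stripping the BOM at index time with a SEDCMD in props.conf. A sketch, with the sourcetype name a placeholder and the hex escape untested against your data (it only affects newly indexed events):

[your:json:sourcetype]
SEDCMD-strip_bom = s/\xEF\xBB\xBF//g

(\xEF\xBB\xBF is the UTF-8 encoding of \ufeff.)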
Hi there, Is it possible to forward the logs from an event hub in Azure cloud which is not publicly accessible into Splunk via the Microsoft Cloud Services add-on? Thanks in advance!
Hi all, not a question, just an attempt to save other people some time. We needed to calculate the score of CVSS 3.1 vector strings. We first used an external Python script, but that comes with a cost, so I decided to implement a CVSS calculator in SPL. It's already tested against a wide range of vector strings and seems pretty robust.

Here is what we created, as a macro named `cvss31`: https://github.com/niko31337/splcvss31/blob/main/cvss31.macro

To call the macro and get the relevant fields, you do something like this:

| makeresults
| eval cvss_vector_string="CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C/CR:M/IR:M/AR:H/MUI:R"
`cvss31`
| table cvss_vector_string, main_score, main_rating, base_score, temporal_score, environmental_score, cvss_error

and you end up with those scores as columns in the result. Make sure you always check for errors in the cvss_error field...

cheers, Niko
Hello experts, I'm trying to deploy Splunk via the Nutanix Calm blueprint. I could not install it successfully with the LOGON_USERNAME & LOGON_PASSWORD parameters; I was only able to install it with this PowerShell command:

Start-Process -Filepath "c:\_Splunk\Splunk_Silent.bat" -Wait

Inside the bat file:

msiexec.exe /I splunkforwarder-7.3.0-657388c7a488-x64-release-64bit.msi DEPLOYMENT_SERVER="<ipadd>:8089" LAUNCHSPLUNK=1 AGREETOLICENSE=Yes /quiet

But when I include the LOGON_USERNAME & LOGON_PASSWORD parameters, that's where the installation fails.
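For reference, LOGON_USERNAME and LOGON_PASSWORD are documented flags of the Splunk universal forwarder MSI, and a frequent failure mode is quoting/escaping of the domain account when the values pass through a blueprint. A hedged sketch of the bat file line with verbose MSI logging added so the failure reason gets captured (account and password are placeholders; the account typically needs the "Log on as a service" right on the host):

msiexec.exe /i splunkforwarder-7.3.0-657388c7a488-x64-release-64bit.msi DEPLOYMENT_SERVER="<ipadd>:8089" LOGON_USERNAME="DOMAIN\svc_splunk" LOGON_PASSWORD="<password>" LAUNCHSPLUNK=1 AGREETOLICENSE=Yes /quiet /L*v C:\_Splunk\msi_install.log

The generated log usually names the failing step, which narrows down whether it is the credentials themselves or a permissions problem.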
Hi @gcusello, I need one more bit of help. From the log below, I am able to remove all the special characters using the script below, but I need to retain the commas in the text (in italics) between Message and Details, and similarly between Message and Success. Appreciate the help.

| eval "EM"=if(isnotnull('cip:AuditMessage.MessageText'),'cip:AuditMessage.MessageText',"Data Not Found")
| rex field=EM max_match=0 "(?<ErrMes>\w+)"
| eval ErrorMessage = mvjoin(ErrMes, " ")
| rex field=ErrorMessage "Message\s+(?<ErrorResponse>.*)\s+Details\s+Message\s+(?<ErrorResponse2>.*)\s+Success"

{"@odata.context":"https://apistaging.payspace.com/odata/v1.0/11997/$metadata#Employee/$entity","Message":"The Nationality field is required., The Province field is required., The Code field is required., The Country field is required.","Details":[{"Message":"The Nationality field is required."},{"Message":"The Province field is required."},{"Message":"The Code field is required."},{"Message":"The Country field is required."}],"Success":false}
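The commas vanish because rex max_match=0 "(?<ErrMes>\w+)" tokenizes on word characters only, so mvjoin rebuilds the text without any punctuation. A hedged alternative is to pull the quoted values straight out of the original JSON before stripping anything; a sketch against the sample event above:

| eval "EM"=if(isnotnull('cip:AuditMessage.MessageText'),'cip:AuditMessage.MessageText',"Data Not Found")
| rex field=EM "\"Message\":\"(?<ErrorResponse>[^\"]+)\",\"Details\""
| rex field=EM max_match=0 "\{\"Message\":\"(?<ErrorResponse2>[^\"]+)\"\}"

ErrorResponse keeps the comma-separated top-level message intact, and ErrorResponse2 becomes a multivalue field holding each Details message.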
Hello Splunkers!! I have two weeks of events, week 1 & week 2, and I need to compare the events of week 1 against week 2. The last row below ("product" is not defined) is the new event in week 2. Likewise I have hundreds of events in week 1 and week 2; whenever a new event appears in week 2, I need that result. Please let me know how to approach this.

Week 1                                      Week 2
Template:account/backToAccount              Template:account/backToAccount
"enableEnhancedCheckout" is not defined     "enableEnhancedCheckout" is not defined
                                            "product" is not defined

Below is the query I have created so far:

index="ABC" ("ERROR" OR "EXCEPTION") earliest=-7d latest=now()
| rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex field=_raw "Error:(?<Error2>.+)"
| rex field=_raw "(?<ErrorM>Error in template script)+"
| rex field=_raw "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
| rex "service=(?<Service>[A-Za-z._]+)"
| rex "Sites-(?<Country>[A-Z]{2})"
| eval Error_Exception_7d=coalesce(Message,Error2,Exception,ErrorM)
| stats count by Error_Exception_7d
| sort - count
| appendcols [ search index="ABC" ("ERROR" OR "EXCEPTION") earliest=-14d latest=-8d
  | rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
  | rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
  | rex field=_raw "Error:(?<Error2>.+)"
  | rex field=_raw "(?<ErrorM>Error in template script)+"
  | rex field=_raw "(?ms)^(?:[^\\|\\n]*\\|){3}(?P<Component>[^\\|]+)"
  | rex "service=(?<Service>[A-Za-z._]+)"
  | rex "Sites-(?<Country>[A-Z]{2})"
  | eval Error_Exception_14d=coalesce(Message,Error2,Exception,ErrorM)
  | stats count by Error_Exception_14d
  | sort - count ]
| stats count by Error_Exception_14d Error_Exception_7d
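One hedged alternative to appendcols (which pairs rows by position rather than by message, so new week-2 messages never line up) is to search both weeks at once, tag each event with its week, and keep messages seen only in week 2. A sketch reusing the extractions above:

index="ABC" ("ERROR" OR "EXCEPTION") earliest=-14d latest=now()
| rex field=_raw "Error\s(?<Message>.+)MulesoftAdyenNotification"
| rex field=_raw "fetchSeoContent\(\)\s(?<Exception>.+)"
| rex field=_raw "Error:(?<Error2>.+)"
| rex field=_raw "(?<ErrorM>Error in template script)+"
| eval Error_Exception=coalesce(Message,Error2,Exception,ErrorM)
| eval week=if(_time>=relative_time(now(),"-7d@d"),"week2","week1")
| stats dc(week) as weeks values(week) as week count by Error_Exception
| where weeks=1 AND week="week2"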
I'm using the DB Connect app with MongoDB. The other inputs work fine, but one input has a problem. I searched the _internal logs and encountered the error below:

[dw-63 - POST /api/connections/LifeRecord/querymetadata] ERROR c.s.d.s.a.s.d.impl.DatabaseMetadataServiceImpl - Unable to get query metadata java.sql.SQLException: Non-existing table referenced: exam at unity.parser.PTreeBuilderValidater.parseTableIdentifier(PTreeBuilderValidater.java:2551) at unity.parser.PTreeBuilderValidater.processTableRef(PTreeBuilderValidater.java:1203) at unity.parser.PTreeBuilderValidater.processTableReferences(PTreeBuilderValidater.java:1498) at unity.parser.PTreeBuilderValidater.ParseQuery(PTreeBuilderValidater.java:1113) at unity.parser.PTreeBuilderValidater.buildLQTree(PTreeBuilderValidater.java:967) at unity.parser.GlobalParser.parse(GlobalParser.java:101) at mongodb.jdbc.MongoPreparedStatement.parseQuery(MongoPreparedStatement.java:279) at mongodb.jdbc.MongoPreparedStatement.getMetaData(MongoPreparedStatement.java:249) at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.getMetaData(HikariProxyPreparedStatement.java) at com.splunk.dbx.connector.connector.impl.JdbcConnectorImpl.getPrepareStatementMetaData(JdbcConnectorImpl.java:271) at com.splunk.dbx.connector.connector.impl.JdbcConnectorImpl.getQueryMetadata(JdbcConnectorImpl.java:124) at com.splunk.dbx.server.api.service.database.impl.DatabaseMetadataServiceImpl.getQueryMetadata(DatabaseMetadataServiceImpl.java:130) at com.splunk.dbx.server.api.resource.ConnectionResource.getQueryMetadata(ConnectionResource.java:166) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) at org.glassfish.jersey.internal.Errors.process(Errors.java:292) at org.glassfish.jersey.internal.Errors.process(Errors.java:274) at org.glassfish.jersey.internal.Errors.process(Errors.java:244) at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) at
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394) at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319) at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205) at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:50) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1651) at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638) at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47) at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638) at com.splunk.dbx.server.api.filter.ResponseHeaderFilter.doFilter(ResponseHeaderFilter.java:30) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1638) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:567) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1377) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:507) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1292) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:249) at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717) at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54) at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:501) at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:556) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:336) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:313) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:171) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:129) at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:375) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938) at java.base/java.lang.Thread.run(Thread.java:829)

This error occurred when I restarted the app. Then, when the scheduled time arrived, I encountered the error below:

[QuartzScheduler_Worker-10] ERROR org.easybatch.core.job.BatchJob - Unable to open record reader com.splunk.dbx.server.exception.ConfMigrationFailException: Fail to migrate conf input, stanza name: <input name>. at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.migrateInputConfiguration(DbInputRecordReader.java:104) at com.splunk.dbx.server.dbinput.recordreader.DbInputRecordReader.open(DbInputRecordReader.java:50) at org.easybatch.core.job.BatchJob.openReader(BatchJob.java:140) at org.easybatch.core.job.BatchJob.call(BatchJob.java:97) at org.easybatch.extensions.quartz.Job.execute(Job.java:59) at org.quartz.core.JobRunShell.run(JobRunShell.java:202) at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)

I need help. What's the matter in this situation? The other scheduled input jobs run fine.
Hi, I added a drilldown in a dashboard. With this piece of code, I have a list of equipment in the drilldown:

|inputlookup lookup.csv
|stats count by id
|fields id

Is it possible to add an "All" option, like the filter in Excel? If yes, how can I do it please? If not, is there another way to do it? Basically I want to choose one single piece of equipment or all equipment.

Thanks, Julia
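In Simple XML this is usually done with a static "All" choice whose value is a wildcard, placed ahead of the dynamic choices from the lookup; the panel search then filters with the token, which matches everything when "All" is selected. A sketch (the token name equipment_tok is a placeholder):

<input type="dropdown" token="equipment_tok">
  <label>Equipment</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup lookup.csv | stats count by id | fields id</query>
  </search>
  <fieldForLabel>id</fieldForLabel>
  <fieldForValue>id</fieldForValue>
  <default>*</default>
</input>

and in the panel search, something like: | inputlookup lookup.csv | search id=$equipment_tok$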
Hello everyone, I am fairly new to Splunk and learning on the fly, so it would be super nice if someone could help me solve this issue or guide me on how to deal with it for now. We have an index cluster of 5 nodes across 2 sites. Site 1 has 2 nodes with 9.1 TB each and site 2 has 3 nodes with 20 TB each... don't ask me why; there was some confusion with the initial installation of the machines. So we have almost full disks in site 1 (98% of storage used) and around 50% on the machines in site 2. Yesterday we added another instance to site 1 with 20 TB of disk, but it does not seem to offload the other two in site 1. What are our options here? Shall we run index rebalancing from the manager node? Any guidance will be much appreciated. Regards
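Running data rebalance from the manager node is the usual option; one hedged caveat is that rebalancing redistributes existing buckets within the constraints of your site replication factor, so how much it relieves site 1 depends on your multisite policies. On the cluster manager:

splunk rebalance cluster-data -action start

and you can watch progress with:

splunk rebalance cluster-data -action status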
I've seen a few posts on the subject, but I'd like to know how we can disable multiple alerts throughout a maintenance window. For example, I'd like to disable alerts 1, 2, and 3 from Saturday 11:30 p.m. until Sunday 6:00 a.m. Thank you in advance.

Reference alert query:

index=ABC sourcetype=XYZ ("Internal System Error")
| stats count
| where count>=30
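If scripting the saved searches' disabled flag on a schedule is not an option, one common workaround is to let each alert suppress itself during the window. A sketch appended to the reference query (evaluated in the search head's timezone; %w is 0 for Sunday and 6 for Saturday):

index=ABC sourcetype=XYZ ("Internal System Error")
| stats count
| where count>=30
| eval dow=strftime(now(),"%w"), hhmm=strftime(now(),"%H%M")
| where NOT ((dow="6" AND hhmm>="2330") OR (dow="0" AND hhmm<"0600"))
| fields - dow hhmm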
I have the following fields, where some of them might be null, empty, or whatnot. I would like to split the Services values, which might have 1-N values separated by commas, into separate columns/fields prefixed with "Sp.". For example:

| makeresults
| eval Platform="p1", Ent="ent1", Ext="100", Fieldx=null(), Fieldy="", Services="user,role,func1,func2"
| append [
  | makeresults
  | eval Platform="p1", Ent="ent2", Ext="100", Fieldx="", Fieldy=null(), Services="user2,role2,func4,func8,func5,role3" ]
| fields _time Platform Ent Ext Fieldx Fieldy Services

gives a result like:

_time                Platform  Ent   Ext  Fieldx  Fieldy  Services
2022-09-30 08:56:11  p1        ent1  100                  user,role,func1,func2
2022-09-30 08:56:11  p1        ent2  100                  user2,role2,func4,func8,func5,role3

How do I split Services into separate fields? I think I cannot just use stats list() by all fields due to the possible null values in the other fields. The desired output keeps all the columns above and adds one Sp.<value> column per distinct service: for the first row Sp.user=user, Sp.role=role, Sp.func1=func1, Sp.func2=func2; for the second row Sp.user2=user2, Sp.role2=role2, Sp.func4=func4, Sp.func5=func5, Sp.func8=func8, Sp.role3=role3.
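One hedged approach: split and mvexpand the services, build the Sp.* names dynamically with eval's {field} syntax, then fold the rows back together. A sketch against the makeresults example above (the wildcarded stats rename is the shakiest part; if it misbehaves on your version, list the Sp.* fields explicitly):

| eval svc=split(Services,",")
| mvexpand svc
| eval sp_name="Sp.".svc
| eval {sp_name}=svc
| fields - svc sp_name
| stats values(Sp.*) as Sp.* values(Fieldx) as Fieldx values(Fieldy) as Fieldy by _time Platform Ent Ext Services

Null or empty Fieldx/Fieldy simply come back empty, which sidesteps the stats list() problem you mention.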
I have log messages in the format below. I want to group the messages by BID. I tried the query below, but I am not getting any events even though there are events that qualify.

{ "details" : [ { "BID" : "123" }, { "BID" : "456" } ] }

Expected output:

BID  Count
123  4
456  3

Query I am using:

{my_search}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| spath input=JSON path=details.BID
| stats values(details.BID) as "BID" by CORRID
| stats count as Count by "BID"
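Two details may be at fault here, offered as guesses: spath addresses JSON arrays as details{}.BID (with the braces), and counting per BID needs the multivalue result expanded first. A sketch:

{my_search}
| rex field=MESSAGE "(?<JSON>\{.*\})"
| spath input=JSON path=details{}.BID output=BID
| mvexpand BID
| stats count as Count by BID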
Hello, I am using a Python script to read from a remote API with pagination. I have one problem while reading data from the API: once I start the script it pulls data, but after that, if I disable the script, the data does not get printed in Splunk even though it has passed through the print statement.
Hello, I have a forwarder installed on a server and it shows up under Clients in Forwarder Management. Then I created a new app in deployment-apps, with a local/input.conf like this:

[monitor:///home/cnttm/Vibus/logTransit/application.log]
crcSalt = <SOURCE>
disable = false
index = mynewindex

[monitor:///home/cnttm/Vibus/logTransit/*.log]
crcSalt = <SOURCE>
disable = false
index = mynewindex

[monitor:///home/cnttm/Vibus/logTransit/*]
crcSalt = <SOURCE>
disable = false
index = mynewindex

[monitor:///home/cnttm/*]
crcSalt = <SOURCE>
disable = false
index = mynewindex

The log file is /home/cnttm/Vibus/logTransit/application.log. Then I created a server class and app, enabled it, and restarted. But when I search index=mynewindex I don't get any results, and I'm pretty sure we have logs in that directory. Does anyone know what's wrong with my syntax? And how do I know/check whether my deployment app is working or not?
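Two likely culprits, offered as guesses: the file must be named inputs.conf (not input.conf), and the attribute is disabled, not disable. A minimal sketch of the app's local/inputs.conf; one stanza is enough, since the four overlapping monitors above all target the same files:

[monitor:///home/cnttm/Vibus/logTransit/*.log]
crcSalt = <SOURCE>
disabled = false
index = mynewindex

To check that the deployment worked, confirm on the forwarder that the app landed under $SPLUNK_HOME/etc/apps, run splunk list monitor there to see the watched paths, and make sure the index mynewindex actually exists on the indexers.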
I have the following sample events:

2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU5  IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU4 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU3 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU2 IOS: 96
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU1 IOS: 76
2022-09-29T19:29:22.260916-07:00 abc log-inventory.sh[24349]: GPU0 IOS: 96

I want to compare the IOS value for each host, and if any one shows a different value, output that result. In the events above, for host=abc all GPUs have an IOS value of 96 except GPU1, which is 76; I want to output GPU1 and its IOS value. I tried diff but it's not working.

Thanks in advance
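One way to do this without diff: extract GPU and IOS with rex, then flag per-host values that differ from the majority. A sketch against the sample events (assuming abc lands in the host field):

| rex "(?<gpu>GPU\d+)\s+IOS:\s*(?<ios>\d+)"
| eventstats count as value_count by host ios
| eventstats max(value_count) as majority_count by host
| where value_count < majority_count
| table host gpu ios

For the sample above this returns only GPU1 with IOS 76; if all GPUs agree, nothing is returned.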
Hello, I have a log file that goes like this:

2022-09-30 09:43:41,038: INSTANCE=34-bankgw1, REF=237324562, MESSSAGE=IST2InterfaceModel.ResponseVerifyCardFromBank:{"F0":"0210","F2":"970422xxxxxx6588","F3":"050000","F4":"000001000000","F7":"0930094340","F9":"00000001","F11":"277165","F12":"094340","F13":"0930","F15":"0930","F18":"7399","F25":"08","F32":"970471","F37":"273094277165","F38":"277165","F39":"15","F41":"00005782",0822,237324562,VNPAYCE","F49":"704","F54":"0000000000000000000000000000000000000000","F62":"EC_CARDVER","F63":"AAsA7QKwYzZX3AAB","F102":"0000000000000000"}

With a log structure like this, I can't really extract the field I want with the Splunk field extractor. The field I want to extract is F39 (which means status) for monitoring purposes. I'm really an amateur when it comes to rex, so can anyone help me with it?
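Since the payload after the prefix is JSON-like, a rex keyed on the quoted key name is usually the simplest route. A sketch, only checked against the sample line above:

| rex "\"F39\":\"(?<F39>[^\"]+)\""
| table _time F39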
Hello, Has anyone tried sending Moogsoft alerts/events to Splunk? Thanks
I need to create a field (30days) with a date 30 days after the date in a given field (pubdate). I believe I have that part working, but I can't seem to get the date to convert to the format I want.

| makeresults
| eval pubdate="2022-09-30,2021-08-31"
| makemv delim="," pubdate
| mvexpand pubdate
| eval epochtime=strptime(pubdate, "%Y-%m-%d")
| eval 30days=epochtime + 2592000
| convert ctime(30days)
| table pubdate, 30days

Which produces:

pubdate     30days
2022-09-30  10/30/2022 00:00:00.000000
2021-08-31  09/30/2021 00:00:00.000000

All I want is to format the 30days field the same way as pubdate - "%Y-%m-%d". Everything I'm trying produces an error.
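convert ctime always renders its own fixed format; strftime lets you choose the output format instead. A sketch with the last two steps swapped out (2592000 seconds = 30 days):

| makeresults
| eval pubdate="2022-09-30,2021-08-31"
| makemv delim="," pubdate
| mvexpand pubdate
| eval epochtime=strptime(pubdate, "%Y-%m-%d")
| eval 30days=strftime(epochtime + 2592000, "%Y-%m-%d")
| table pubdate, 30days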
Is cloud data stored in Canada?