All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have Prometheus running in my existing setup for infrastructure monitoring, and I want to forward Prometheus logs/metrics to Splunk so that I can monitor them in the Splunk Cloud UI. How can we do that? I am not able to find proper documentation with steps for this. My current infrastructure runs on AWS EKS. Please share if anyone has documentation regarding this.
Hello community, Can anyone advise whether it's possible to delete my search history? I'd like to delete old searches that serve no value, e.g., those that returned no results, failed (i.e., were test searches while learning), or are duplicates. I've searched the help docs and forums without luck. Thank you for your help in advance. Pietra
Hi, Are there any current instructions on how to disable this error message that I keep receiving? Where can I edit the conf file to disable this error? I'm currently learning on a few virtual machines using VMware Workstation and do not need such large data limits in place just for training purposes.    "The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch."
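For anyone else hitting this in a lab: the threshold behind that message is minFreeSpace under the [diskUsage] stanza of server.conf. A minimal sketch (the 2000 value is only an example; lowering this is sensible only on throwaway training VMs, the value is in MB, and a restart is needed afterwards):

```
# $SPLUNK_HOME/etc/system/local/server.conf
# Sketch only: lower the free-disk threshold for a training VM (value in MB)
[diskUsage]
minFreeSpace = 2000
```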
I found that I am the only user in this situation. My role is admin. I thought it was a performance problem, but even after solving the performance problem I still can't run real-time searches, although scheduled searches run fine. How do I get myself able to run a real-time search?
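Real-time search is gated by the rtsearch capability, so one thing worth checking (a sketch, assuming you can edit authorize.conf on the search head; the stanza name must match your role) is whether the role actually carries it:

```
# authorize.conf sketch: make sure the role has the real-time search capability
[role_admin]
rtsearch = enabled
```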
Hi All, Could you please help with extracting fields from a Java error log? I would like to see the result in a table format:
Code | Message
1234 | due to system error
The error log is as below:
message: Exception Occurred ::org.springframework.web.client.HttpClientErrorException$BadRequest: 400 Bad Request: [{"code":"1234","reason":"due to system error.","type":"ValidationException"}]
at org.springframework.web.client.HttpClientErrorException.create(HttpClientErrorException.java:303)
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:384)
at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:325)
......
I have tried a few extractions in Splunk searches, but none were fruitful.
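One possible extraction, as a sketch: it assumes the JSON fragment always appears in the [{"code":...,"reason":...}] shape shown in the sample, and the index/sourcetype are placeholders to replace with your own:

```
index=app sourcetype=java_logs "HttpClientErrorException"
| rex field=_raw "\"code\":\"(?<Code>\d+)\",\"reason\":\"(?<Message>[^\"]+?)\.?\""
| table Code Message
```

The optional \.? before the closing quote strips the trailing period so "due to system error." comes out as "due to system error".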
On both the business transaction page and also on dashboards I am having a problem with the HTML date control in the upper right of the screen. My issue is that the time will not stick in the control. If I use the drop-down to set it to, e.g., 10 am, it will jump to 7 pm. If I set it to 6 pm, it might jump to 4 am. I think if I let my session completely expire and then come back into the app, the control starts working again. But usually at some point while I am working it gets into the messed-up state above, and I am unable to fix it and get the dates I want. It seems like a JavaScript error, but I think my personal session also might be retaining old dates or something like that. Developer tools is logging many copies of this AngularJS error (URL-decoded and abbreviated here for readability):
js-lib-body-concat.js?7927b71ebc99ebb4583e4909474ca133:120 Error: [$rootScope:infdig] 10 $digest() iterations reached. The watchers fired in the last 5 iterations include "dateOutput" and an internal fn, each repeatedly shifting the value forward one hour per digest, e.g. oldVal "2022-07-01T20:00:00.000Z" to newVal "2022-07-01T21:00:00.000Z", and oldVal "2022-07-02T14:00:00.000Z" to newVal "2022-07-02T15:00:00.000Z".
Stack: m.$digest and m.$apply (js-lib-body-concat.js:146, 148), ViewUtil.safeApply (shared.webpack.min.js), HTMLDocument.<anonymous> (MainAppModuleCode.webpack.min.js), HTMLDocument.dispatch / v.handle / e.invokeTask (js-lib-head-concat.js), Object.onInvokeTask (vendor.js). AngularJS version 1.5.11.
Can anyone offer any guidance on which fields would be considered 'required' for inserting a record into the TrackMe 'trackme_host_monitoring' lookup, and whether any other supporting lookups would require inserts/updates as well? We have been tasked with host monitoring and have implemented TrackMe for a few indexes so far. Our manager wants us to check the TrackMe host activity against a 'source of truth'. For example, our Azure team uses a script to generate a list of all Azure hosts every night at midnight. We monitor that list, ingest it into an index, and then update a lookup table with the values we need. We figure we can run a report each day that compares a list of hosts (in this case Azure VMs, but this could apply to firewalls, etc.) from our 'source of truth' against the hosts present in TrackMe's trackme_host_monitoring lookup. The devil is in the details, but at the end of the day we figure we could insert a host into the TrackMe lookup if it wasn't present there. Any advice appreciated.
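As a sketch of the daily comparison itself, hedged heavily: the source-of-truth lookup name, its host column, and the key column of trackme_host_monitoring (shown here as object) are all assumptions, so check the actual schema of your TrackMe version before relying on this:

```
| inputlookup azure_source_of_truth.csv
| rename azure_host as object
| lookup trackme_host_monitoring object OUTPUT object as tracked
| where isnull(tracked)
| table object
```

Rows that survive the where clause are hosts in the source of truth that TrackMe does not yet know about.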
I'm fairly new to Splunk, so forgive me if this is an easy question. I'm trying to sum a field, and then sum a subset (top 10) of the same field, so that I can get the percentage of web traffic generated by the top 10 users. I can get the individual searches to work no problem, but I can't get them to work together.
Search 1: index=web category=website123 | stats sum(bytes) as total
Search 2: index=web category=website123 | stats sum(bytes) as userTotal by userID | sort 10 -userTotal | stats sum(userTotal) as userTotal10
What I want to do is take those two results and do an eval percent=userTotal10/total*100 to give me a percentage. Essentially, I want to show the percentage of traffic generated by the top 10 users. So far, I have not been able to figure out how to do that. Any help would be greatly appreciated.
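One way to combine the two into a single search is eventstats, which attaches the overall total to every row before the top-10 filter removes the rest (a sketch against the index and fields named in the question):

```
index=web category=website123
| stats sum(bytes) as userTotal by userID
| eventstats sum(userTotal) as total
| sort 10 -userTotal
| stats sum(userTotal) as userTotal10, max(total) as total
| eval percent=round(userTotal10/total*100, 2)
```

Because every row carries the same total after eventstats, max(total) simply carries it through the final stats.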
Hi All, I have integrated Splunk HEC with Spring Boot. When I hit the application and check in Splunk, I am unable to see logs in Splunk search with the given index. I am using sourcetype log4j2. Below is my log4j2 XML file:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="console" target="SYSTEM_OUT">
      <PatternLayout pattern="%style{%d{ISO8601}} %highlight{%-5level }[%style{%t}{bright,blue}] %style{%C{10}}{bright,yellow}: %msg%n%throwable" />
    </Console>
    <SplunkHttp
      name="splunkhttp"
      url="https://localhost:8088"
      token="xxxx-xxxx-xxxx-xxxx"
      host="localhost"
      index="vehicle-api_dev"
      type="raw"
      source="http-event-logs"
      sourcetype="log4j"
      messageFormat="text"
      disableCertificateValidation="true">
      <PatternLayout pattern="%m" />
    </SplunkHttp>
  </Appenders>
  <Loggers>
    <!-- LOG everything at INFO level -->
    <Root level="info">
      <AppenderRef ref="console" />
      <AppenderRef ref="splunkhttp" />
    </Root>
  </Loggers>
</Configuration>
My pom.xml configuration related to Splunk:
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
  <groupId>com.splunk.logging</groupId>
  <artifactId>splunk-library-javalogging</artifactId>
  <version>1.8.0</version>
  <scope>runtime</scope>
</dependency>
<repositories>
  <repository>
    <id>splunk-artifactory</id>
    <name>Splunk Releases</name>
    <url>https://splunk.jfrog.io/splunk/ext-releases-local</url>
  </repository>
</repositories>
I am unable to see the logs. Can anyone help me? Thanks in advance
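When HEC events go missing silently, the indexer's own HEC component logs are often the quickest clue. A sketch of a sanity check to run on the Splunk side:

```
index=_internal sourcetype=splunkd component=HttpInputDataHandler
| stats count by log_level
```

It is also worth double-checking that the index named in the appender actually exists and that the HEC token is permitted to write to it; either mismatch typically makes events disappear without a client-side error.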
Hi folks, Below is the architecture: a multisite indexer cluster with 8 peers and 1 cluster manager, plus a search head cluster.
Site2 peers report to the search heads when I search index=_internal | stats count by splunk_server, but if I search a particular index, e.g. index=cisco, windows, or linux, then only site1 peers report, on all the search heads.
Note:
1. The indexer cluster is stable; SF and RF are met.
2. Connectivity to all the peers and the CM is established.
3. All peers are in a healthy state in distributed search.
4. Search affinity is disabled.
5. There are no connectivity-related errors in splunkd.log on the peers.
Need help rectifying this issue. Thanks
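To separate "site2 holds no buckets for these indexes" from "the search heads are not querying site2", a quick per-index peer check may help (a sketch; substitute each affected index):

```
| tstats count where index=cisco by splunk_server
```

If site2 peers are absent here but present in the index=_internal search, the data for those indexes is likely never being forwarded or replicated to site2, which points at forwarder outputs or replication settings rather than at search-time behaviour.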
Hello, I would like to develop a Splunk alert for a source where we ingest data using a REST API via a scripted input on our heavy forwarder. I want to set up an email alert whenever there is an interruption in data ingestion from the source. I am using the search below but not seeing any results.
| tstats latest(_time) as latest where index=XYZ by source
| eval recent = if(latest > relative_time(now(),"-10m"),1,0), realLatest = strftime(latest,"%c")
| where recent=0
Can someone please help me with the search? Thanks
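A variant that may be easier to debug, computing the staleness explicitly (same index placeholder as in the question):

```
| tstats latest(_time) as latest where index=XYZ by source
| eval minutesSinceLast=round((now()-latest)/60, 1)
| eval realLatest=strftime(latest, "%c")
| where minutesSinceLast > 10
```

Note one caveat: this only reports sources that have indexed at least one event inside the search time range, so a source that stops entirely eventually drops out of the results rather than showing as stale; running the alert over a generous window (e.g. last 24h) mitigates that.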
I have a playbook using the Splunk "run query" action block with the "attach_result" action which adds the query results to the vault. Is there any way to download these results locally using the sam... See more...
I have a playbook using the Splunk "run query" action block with the "attach_result" action which adds the query results to the vault. Is there any way to download these results locally using the same playbook as opposed to manually navigating to each container and downloading the results? I have a scenario where I would like to download these files from the container as they run and then place them on a shared drive (or moving the file from the Phantom box to the shared drive would work great as well).   It seems like it should be simple, but I cannot figure out how to interact with this file using a playbook. Any help would be appreciated!     
Hi, As soon as an event group ends I want to trigger an alert and send an email with the Shipment ID that ended. Example log:
EVENT GROUP A = started and ended.
2022-12-20 10:43:04.468 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker started. ****
2022-12-20 10:43:04.471 +01:00 [ShipmentTransferWorker] **** [Shipment Number: 000061015] ****
2022-12-20 11:06:19.097 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker ended ****
EVENT GROUP B = started and not ended yet.
2022-12-20 13:43:04.468 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker started. ****
2022-12-20 13:43:04.471 +01:00 [ShipmentTransferWorker] **** [Shipment Number: 000061016] ****
My SPL:
index=app sourcetype=MySource host=MyHost "ShipmentTransferWorker"
| eval Shipment_Status = if(like(_raw, "%Execution of Shipment Transfer Worker started%"), "Started", if(like(_raw, "%Execution of Shipment Transfer Worker ended%"), "Ended", null()))
| transaction host startswith="Execution of Shipment Transfer Worker started" endswith="Execution of Shipment Transfer Worker ended" keepevicted=true
| rex "Shipment Number: (?<ShipmentNumber>\d*)"
| eval Shipment_Status_Started = if(like(_raw, "%Execution of Shipment Transfer Worker started%"), "Started", null())
| eval Shipment_Status_Ended = if(like(_raw, "%Execution of Shipment Transfer Worker ended%"), "Ended", null())
| table ShipmentNumber Shipment_Status_Started Shipment_Status_Ended
Suppose that EVENT GROUP B ends with the following event after 6 hours; then I want to trigger an alert and send an email with shipment number 000061016:
2022-12-20 19:43:19.097 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker ended ****
How can I create a trigger and email once the event ends?
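A sketch of an alert search that returns only completed groups, so it can be scheduled (e.g. every 5 minutes over a recent window) and set to trigger when the result count is greater than zero; the index, sourcetype, and phrases follow the SPL in the question:

```
index=app sourcetype=MySource host=MyHost "ShipmentTransferWorker"
| transaction host startswith="Execution of Shipment Transfer Worker started" endswith="Execution of Shipment Transfer Worker ended"
| rex "Shipment Number: (?<ShipmentNumber>\d+)"
| where isnotnull(ShipmentNumber)
| table _time ShipmentNumber
```

Without keepevicted=true, transaction returns only closed groups, which is exactly the "just ended" set; the ShipmentNumber field can then be referenced in the email alert action via $result.ShipmentNumber$.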
Hi Splunk Experts, I'm looking for help splitting a table that is grouped into a single row into multiple rows. I would like to identify the filesystems that are above 40% and collect stats and visuals, but the statistics table is displayed as a single row only. I tried mvexpand, but it doesn't accept two fields, only one; if I apply it per field it generates many rows. I'm missing something here. Can you please help me with a workaround?
Splunk Query:
index=lab_env host=labhmc earliest=-4h latest=now
| spath path=hmc_info{} output=LIST
| rename LIST as _raw
| kv
| rex field="hmc_info{}.fs_utilization" mode=sed "s/\%//g"
| table hmc_name hmc_info{}.Filesystem hmc_info{}.fs_utilization
Splunk Event:
{"category": "hmc", "hmc_name": "labhmc", "hmc_uptime": "73", "hmc_data_ip": "127.0.0.1", "hmc_priv_ip": "127.0.0.1", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/dev", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/dev/shm", "fs_utilization": "1%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run", "fs_utilization": "3%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/sys/fs/cgroup", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/", "fs_utilization": "46%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/data", "fs_utilization": "2%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/home", "fs_utilization": "4%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/extra", "fs_utilization": "17%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/dump", "fs_utilization": "1%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/var", "fs_utilization": "14%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/var/hsc/log", "fs_utilization": "25%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run/user/601", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run/user/604", "fs_utilization": "1%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run/user/600", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
I am trying to create an after-hours query with specific time frames: 1. Mon 0000-0700 and 1900-2400, 2. Tue 0000-0700 and 1900-2400, 3. Wed 0000-0700 and 1900-2400, Thu 0000-0700 and 1900-2400, Fri 0000-0700 and 1900-2400, Sat 0000-2400, and Sun 0000-2400. I have my cron expression set to 43 10 * * *.
| sort - _time
| eval user=lower(user)
| eval Day=strftime(_time,"%A")
| eval Hour=tonumber(strftime(_time,"%H"))
| eval Date=strftime(_time,"%Y-%m-%d")
| search Hour IN (19,20,21,22,23,0,1,2,3,4,5,6,7)
| table Date, Day, Hour, "User Account"
I like the way this is displayed, but I cannot figure out how to combine this query with a weekend (Fri 1900 - Mon 0700) query. Or will I have to have two different queries? Once completed this will make a good dashboard.
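One way to fold the weekend window into the same query is to test day-of-week and hour together instead of listing hours (a sketch; strftime's %w yields 0 for Sunday and 6 for Saturday):

```
| eval dow=tonumber(strftime(_time, "%w")), hour=tonumber(strftime(_time, "%H"))
| where dow==0 OR dow==6 OR hour<7 OR hour>=19
```

Because Saturday and Sunday are included in full and weekdays keep only the before-0700 and after-1900 hours, this single condition also covers the Fri 1900 to Mon 0700 span, so no second query is needed.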
I am ingesting Azure Activity events via the Splunk Add-on for Microsoft Cloud Services and was wondering whether there are any recommendations/best practices for the Max Time Wait and Max Batch Size settings? Thx
Hey there! I'm trying to monitor (batch) a folder containing XML files. The XML files don't necessarily have the same structure; they have multiple hierarchies, and the nesting level might vary. Where and how do I configure a sourcetype that knows how to handle this kind of case, so I won't have to parse the data with rex at search time? Example of a file that may exist:
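A possible starting point, as a props.conf sketch: the sourcetype name is a placeholder, the line-breaking regex assumes each file begins with an XML declaration, and KV_MODE = xml asks Splunk to auto-extract fields from the XML at search time, which tolerates varying structure and nesting depth:

```
# props.conf sketch for a batch-monitored folder of XML files
[my_xml_files]
KV_MODE = xml
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = <\?xml
TRUNCATE = 0
```

Assign this sourcetype in the batch stanza of inputs.conf for the monitored folder; TRUNCATE = 0 prevents long single-event files from being cut off.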
Hey! So I have a self-hosted Splunk Enterprise environment with a cluster master (the deployment server is a separate instance) and 3 indexers. I am trying to push apps to my indexers, but when I change the master-apps folder to add in the applications and then run 'splunk cluster-apply', I get the following output: "No new bundle will be pushed. The cluster manager and peers already have this bundle with bundleId=9FE2BF9FC21C0681C01644653BD69C6C." Before this I pushed a single app and it worked fine, then I removed it. Now I am trying to push multiple apps and am getting the output above.
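For reference, the documented bundle-push sequence from the cluster manager looks like this (a sketch; an "already have this bundle" message generally means the manager computed the same checksum over master-apps, so the validation step's output is worth inspecting to confirm the new apps are actually being picked up from the expected path):

```
splunk validate cluster-bundle --check-restart
splunk show cluster-bundle-status
splunk apply cluster-bundle --answer-yes
```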
Good Morning, I'm having trouble converting a whole number to a decimal.  Example:     | eval Amount = round(tonumber(balance_amount), 2) Result: 814118225.00     But I need the number to look like:  8141182.25
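If balance_amount is stored as an integer number of cents (which the expected output suggests), the fix is a divide-by-100 before rounding; round on its own never moves the decimal point, it only sets the displayed precision. A sketch:

```
| eval Amount = round(tonumber(balance_amount) / 100, 2)
```

With balance_amount = 814118225 this yields 8141182.25.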
Hi all, has anyone used the "Dismiss Azure Alert" workflow action in the Splunk Add-on for Microsoft Azure app, run by Enterprise Security? I have to configure it but have never done so. Reading the documentation, it seems that it's all configured and should run without any problem. Does it require some special configuration or anything that needs special attention? Thank you for your time. Ciao. Giuseppe