I have a few scheduled jobs running from a TA. Multiple ones have | collect index=summary at the end of the SPL. Some of them return 0 results when they run, with the warning "no results to summary index", yet when I rerun the job manually I can see the results. I can see there's a macro error in the job that returned no results, but another job with very similar SPL runs fine. When I looked at search.log, the one thing that stood out was in the log of the job that ran with results: "user context: Splunk-system-user". The job that did not return results did not have "user context: Splunk-system-user". My question is: what sets the user context, and what can override it (if anything)? I'd like to work out whether this is the cause of my problems. Thanks.
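One way to compare the two jobs side by side is to check the scheduler's own logs, which record the user each scheduled search was dispatched as. A sketch (the savedsearch_name values are placeholders for your two job names):

index=_internal sourcetype=scheduler savedsearch_name IN ("job_with_results", "job_without_results")
| table _time user app savedsearch_name status run_time

If the two jobs show different user values here, that difference in dispatch context would be the first thing to chase.
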
The growing role of Cisco Cloud Observability in modern business operations, starting with Business Metrics

Hey everyone, we have some exciting news to share! Our team has been working hard on a new feature, Business Metrics for Cisco Cloud Observability, which was recently released. Want to see how your application performance ties in with your business outcomes? Or simplify your cloud environment? Maybe speed up problem-solving? Business Metrics is designed to help with these and more. Gaining insight into Business Metrics provides a clear indication of how important some business transactions are relative to others. By knowing which transactions directly influence revenue, you can prioritize issues faster. With this new feature, you can also baseline processes to make strategic investment decisions in IT or application development that improve areas such as customer experience. Curious about how all this works? We've collected information about this feature and the potential benefits it offers for your IT operations. Read on, and don't forget to share your questions and impressions!

Announcements
Our General Manager and Senior Vice President Ronak Desai announced Business Metrics at AWS this week. Check out his post on the Cisco blog, Delivering application performance to maximize business KPIs, about how this feature will not only greatly enhance business context for applications running on Amazon Web Services (AWS), but also how it fits into the rapidly changing technical landscape.

Learn about Business Metrics
We're hosting a webinar: From metrics to revenue: A deep dive into Cisco Cloud Observability. Come join us live on December 13 (Americas) and 14 (APAC and EMEA). Also, check out our Knowledge Base article, Using Business Metrics on Cisco Cloud Observability. It's a practical guide on when to use Business Metrics and how to configure and use it. Get the details about Business Metrics in the Documentation. For an official take, here's our press release on the Cisco Newsroom: Cisco Launches New Business Performance Insight and Visibility for Modern Applications on AWS.

Hello All, is there any method or workaround to export a trellis-layout visualization in a dashboard to PDF? Any suggestions or inputs will be very helpful. Thank you, Taruchit

I have a custom sourcetype with the following advanced setting:

EXTRACT-app:
^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app  (?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

I updated the regex to be slightly less restrictive about the whitespace following the "_app" portion:

EXTRACT-app:
^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app\s+(?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

(So instead of matching exactly two spaces after `_app`, we now match one or more whitespace characters.) After saving this change, it appears Splunk Cloud still uses the previous regex: events that include only a single space after "_app" don't get their fields extracted. I thought perhaps I needed to wait a little while for the change to propagate, but I made the change yesterday and it still doesn't extract the fields today. Is there anything else I need to do for the regex change to take effect?

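One way to confirm whether the new pattern itself matches is to apply it inline with rex, which bypasses the saved EXTRACT entirely. A sketch with a placeholder sourcetype (the host group is renamed here so the sketch doesn't clobber the default host field):

index=* sourcetype=my_custom_sourcetype
| rex "^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<log_host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app\s+(?P<level>\w+)"
| table _time service level

If the fields extract here but not via the saved setting, the problem is with the knowledge object itself (permissions, app context, or caching) rather than with the regex.
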
Hello, I have the following situation: my logs contain ETL logs, and I've already extracted some data via field extractions. The log structure is the following:

Fri Dec 1 16:00:59 2023 [extracted_pid] [extracted_job_name] [extracted_index_operation_incremental] extracted_message

Example:

Fri Dec 1 07:57:40 2023 [111111][talend_job_name] [100] End job
Fri Dec 1 06:50:40 2023 [111111][talend_job_name] [70] Start job
Fri Dec 1 06:50:39 2023 [111111][talend_job_name1] [69] End job
Fri Dec 1 05:40:40 2023 [111111][talend_job_name1] [30] Start job
Fri Dec 1 05:40:39 2023 [111111][talend_job_name2] [29] End job
Fri Dec 1 02:50:40 2023 [111111][talend_job_name2] [1] Start job

Expected:

PID     NAME              EXEC_TIME
111111  talend_job_name   1h 7min
111111  talend_job_name1  1h 10min
111111  talend_job_name2  2h 50min

What I was asked to do is extract a table containing the job name and the execution time, one row per PID (a job can be executed multiple times, but each run has a different PID), so the data is readily available. A job does not necessarily start at index 1, since all subjobs inside a job are logged under separate names (for example, an "import all" job could contain 10 subjobs, each with a different name). My idea is a query that uses the PID and the job name combined as the primary key, taking the start time from the lowest extracted_index_operation_incremental for that key and the end time from the highest value for that key. Any help? Thanks for any reply.

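A possible starting point, as a sketch with assumed names (adjust the index and the extracted_* field names to your actual extractions): treat PID plus job name as the composite key and take the span between the earliest and latest event for that key:

index=etl_logs ("Start job" OR "End job")
| stats min(_time) AS start_time max(_time) AS end_time BY extracted_pid extracted_job_name
| eval EXEC_TIME = tostring(end_time - start_time, "duration")
| rename extracted_pid AS PID, extracted_job_name AS NAME
| table PID NAME EXEC_TIME

Using _time rather than the incremental index gives the wall-clock duration directly; the min/max pattern is the same either way.
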
I am working on upgrading a heavy forwarder instance that is running the out-of-support version 7.3.3. In order to upgrade it to 9.0.1, is there an intermediate version it must be upgraded to first? I searched for an upgrade path but had no luck. Thanks.

Hello, we need to patch the OS of our Splunk Enterprise cluster, which is distributed across two sites, A and B. We will start the activity on site A, which contains one Deployer, two SHs, one MN, three indexers, and three HFs. Site B contains one SH, three indexers, and one HF, and will be updated later. Considering that the OS patching requires a restart of the nodes, can you please tell me the Splunk best practice for restarting them? I'd start with the SH nodes, then the indexer nodes, Deployer, MN, and HFs, all one by one. Do I have to enable maintenance mode for each node, restart the node, and then disable maintenance mode, or is it sufficient to stop Splunk on each node and restart the machine? Thank you, Andrea

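For the indexer peers specifically, the usual pattern (a sketch of an outline, not a verified runbook for this environment) is to enable maintenance mode once on the cluster manager for the whole peer restart cycle, rather than toggling it per node:

# On the MN (cluster manager), before restarting any indexer peer:
splunk enable maintenance-mode
# ...patch the OS and restart each peer, one at a time...
# After all peers have rejoined the cluster:
splunk disable maintenance-mode
splunk show maintenance-mode

Maintenance mode is an indexer-cluster concept set on the manager; the SHs, HFs, Deployer, and the MN itself can generally just be stopped cleanly and restarted one at a time.
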
Hello Team, I have a weird issue that I'm struggling to troubleshoot. A month ago I realized that my WinEventLog logs were consuming too much of my license, so I decided to index them in the XmlWinEventLog format. To do this, I simply modified the inputs.conf file of my Universal Forwarder. I changed from this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = false
sourcetype = WinEventLog
index = wineventlog

To this configuration:

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\sgroupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\sgroupPolicyContainer)"
renderXml = true
sourcetype = XmlWinEventLog
index = wineventlog

Then I started receiving events and my license usage went down, which made me happy. However, on closer observation, I realized that I wasn't receiving all the events as before. The event frequency of the XmlWinEventLog logs now looks random; you can observe this on the timelines and in the metrics (screenshots omitted). On the other hand, with the WinEventLog format, I have no issues. I tried reinstalling the UF, there are no interesting errors in splunkd.log, and I am out of ideas for troubleshooting. Thank you for your help.

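To help rule out the forwarder versus the indexing tier, one check (a sketch; metrics.log series names are the lowercased sourcetype) is to compare the per-sourcetype throughput recorded on the indexers:

index=_internal source=*metrics.log group=per_sourcetype_thruput series=xmlwineventlog
| timechart span=5m sum(kb) AS kb_indexed

Gaps here that match the gaps in your event timeline would point at the input side rather than at anything happening at search time.
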
Hello, can you please tell me what happens to email alerts if the SMTP server used for email delivery is temporarily offline? Is there a buffer where alerts are saved and then sent once the SMTP server becomes available again? Is there a link to Splunk documentation about this? Thank you, Andrea

Hi, I am trying to find out how many data sources and endpoints we have integrated into Splunk. How can we get this information? Can anyone please provide me a query to find it?

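A quick way to inventory what's coming in (a sketch; "endpoints" here is approximated by distinct hosts, which may or may not match your definition):

| tstats count dc(host) AS host_count WHERE index=* BY index sourcetype
| sort - count

This counts events and distinct hosts per index/sourcetype pair over the selected time range, using only indexed fields, so it stays fast even across large environments.
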
I don't know if this is the right place to ask, but I'm currently looking for three members for BotS v7, coming December 7th in Tokyo. If anyone is interested, give me a reply to this post; or if anyone knows the right place for me to look for members, I'd greatly appreciate it if you'd let me know!

CrowdStrike Falcon FileVantage Technical Add-On: https://splunkbase.splunk.com/app/7090

When the API returns more than one event, the result in Splunk is a single event with all the JSON objects merged together, which makes Splunk's JSON parsing fail. Judging by the Python code, this appears to be intentional, given the join here (in ~/etc/apps/TA_crowdstrike_falcon_filevantage/bin/TA_crowdstrike_falcon_filevantage_rh_crowdstrike_filevantage_json.py):

try:
    helper.log_info(f"{log_label}: Preparing to send: {len(event_data)} FileVantage events to Splunk index: {data_index}")
    # --> the newline join below is what concatenates all events into one payload:
    events = '\n'.join(json.dumps(line) for line in event_data)
    filevantage_data = helper.new_event(source=helper.get_input_type(),
                                        index=helper.get_output_index(),
                                        sourcetype=helper.get_sourcetype(),
                                        data=events)
    ew.write_event(filevantage_data)
    helper.log_info(f"{log_label}: Data for {len(event_data)} events from FileVantage successfully pushed to Splunk index: {data_index}")

So it is important to create a proper props.conf to re-split the events with a LINE_BREAKER (note that LINE_BREAKER requires a capturing group):

splunk@ncesplkpoc01:~/etc/apps/TA_crowdstrike_falcon_filevantage$ cat local/props.conf
[crowdstrike:filevantage:json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true

Hello, I wonder if there are plans to extend the MITRE ATT&CK framework coverage for ICS? How could someone build upon the features this SSE brings, to add additional framework elements? Is there any step-by-step guide that could be shared? Thanks, Mihaly

I have a saved search with n results, and I need to set up an alert email for those results by creating an alert. If I use | map "savedsearch", the result is "no events found", but there are events in the results of the saved search. Please help me with this.

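One thing worth checking: map runs its subsearch once per incoming result, so if nothing is piped into it, it never executes at all. If the goal is simply to alert on the saved search's own results, running it directly as the alert's search may be enough (a sketch; the name is a placeholder):

| savedsearch "my_saved_search"

The alert can then trigger on "number of results greater than 0".
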
Hi, once a month we receive a file via email that we manually upload to Splunk as a lookup CSV file. The current process is to delete the old file and upload the new one, keeping the same file name. The existing reports use this file without any issues. There is now a requirement to compare the current file with the previous version and highlight whether any values have been added or removed (the columns stay the same). Initially I wanted to use the inputlookup and collect commands to output the data into an index and then build a search to compare the data based on ingest time, effectively comparing the two files. However, I'm getting the following error: "The lookup table 'test.csv' requires a .csv or KV store lookup definition." The file actually exists and is located in /opt/splunk/etc/apps/test_app/lookups/test.csv, and the lookup definition test_LD also exists. I suspect this is caused by the size of the lookup file (approx. 36 MB) and wanted to ask for suggestions or workarounds. Many thanks.

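Two hedged thoughts. First, that error message usually points at how the lookup is referenced rather than at its size, so it may help to reference the lookup definition (test_LD) instead of the file name. Second, a sketch of the snapshot-and-compare approach, with placeholder index, sourcetype, and key field names:

| inputlookup test_LD
| eval snapshot_time=now()
| collect index=lookup_snapshots sourcetype=lookup_snapshot

Then, to diff the two most recent snapshots:

index=lookup_snapshots sourcetype=lookup_snapshot
| eventstats max(snapshot_time) AS latest_snapshot
| eval version=if(snapshot_time=latest_snapshot, "current", "previous")
| stats values(version) AS seen_in BY key_field
| where mvcount(seen_in)=1

Rows that only appear in "previous" were removed; rows only in "current" were added. This assumes exactly two snapshots are in range, so constrain earliest/latest accordingly.
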
Hi, I'm trying to configure SC4S using the following documentation: Quickstart Guide - Splunk Connect for Syslog. But when I run the sudo systemctl start sc4s command, I get errors during initialization (screenshot not reproduced here). Do you have any idea what's going on? Note also that I've configured the Podman http-proxy.conf file to add my proxy.

How do I store logs in MinIO (on-premises) from Splunk? I created a bucket named splunk and can successfully mc cp test.txt s3/splunk-bucket, but Splunk can't load files into the bucket.

My indexes.conf file:

[smartstore]
homePath = $SPLUNK_DB/smartstoredb/db
coldPath = $SPLUNK_DB/smartstoredb/colddb
thawedPath = $SPLUNK_DB/smartstoredb/thaweddb
remotePath = volume:s3

[volume:s3]
storageType = remote
path = s3://splunk
remote.s3.access_key = minioadmin
remote.s3.secret_key = minioadmin
remote.s3.supports_versioning = false
remote.s3.endpoint = http://10.10.10.1:9000

MinIO config.json:

{
  "version": "10",
  "aliases": {
    "gcs": {
      "url": "https://storage.googleapis.com",
      "accessKey": "YOUR-ACCESS-KEY-HERE",
      "secretKey": "YOUR-SECRET-KEY-HERE",
      "api": "S3v2",
      "path": "dns"
    },
    "local": {
      "url": "http://10.10.10.1:9000",
      "accessKey": "minioadmin",
      "secretKey": "minioadmin",
      "api": "s3v4",
      "path": "auto"
    },
    "play": {
      "url": "http://10.10.10.1:9000",
      "accessKey": "minioadmin",
      "secretKey": "minioadmin",
      "api": "S3v4",
      "path": "auto"
    },
    "s3": {
      "url": "http://10.10.10.1:9000",
      "accessKey": "minioadmin",
      "secretKey": "minioadmin",
      "api": "s3v4",
      "path": "auto"
    }
  }
}

PS: I have 3 indexers and a cluster master.

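One detail worth double-checking (hedged, based on the documented SmartStore pattern rather than on this exact setup): remotePath normally carries a per-index suffix, e.g.:

[smartstore]
remotePath = volume:s3/$_index_name

You can also test connectivity to the remote store from the command line, independent of indexing:

splunk cmd splunkd rfs -- ls --starts-with volume:s3

If that listing fails, the errors it prints usually narrow the problem down to credentials, endpoint, or bucket naming.
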
Cry for help! I installed an add-on in Splunk, but it won't open normally; only a white screen appears. My Splunk version is 9.0.4. How should I solve this problem? Thanks, all! Here is the relevant error section from web_services.log:

2023-12-01 10:16:41,411 ERROR [65694209637fbde458fdd0] startup:112 - Unable to read in product version information; [HTTP 401] Client is not authenticated
2023-12-01 10:16:41,412 INFO [65694209637fbde458fdd0] startup:139 - Splunk appserver version=UNKNOWN_VERSION build=000 isFree=False isTrial=True
2023-12-01 10:16:41,413 INFO [65694209637fbde458fdd0] i18n_catalog:46 - i18ncatalog: translations_retrieved=0.0004456043243408203 etag_calculated=4.3392181396484375e-05 overall=0.0004889965057373047
2023-12-01 10:16:41,413 ERROR [65694209647fbde459b3d0] startup:112 - Unable to read in product version information; [HTTP 401] Client is not authenticated
2023-12-01 10:16:41,415 INFO [65694209647fbde459b3d0] startup:139 - Splunk appserver version=UNKNOWN_VERSION build=000 isFree=False isTrial=True
2023-12-01 10:16:41,416 INFO [65694209647fbde459b3d0] _cplogging:216 - [01/Dec/2023:10:16:41] ENGINE Started monitor thread 'Monitor'.
2023-12-01 10:16:41,416 INFO [65694209647fbde459b3d0] root:168 - ENGINE: Started monitor thread 'Monitor'.
2023-12-01 10:16:41,427 ERROR [65694209647fbde459b3d0] config:149 - [HTTP 401] Client is not authenticated
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/config.py", line 147, in getServerZoneInfoNoMem
    return times.getServerZoneinfo()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/times.py", line 163, in getServerZoneinfo
    serverStatus, serverResp = splunk.rest.simpleRequest('/search/timeparser/tz', sessionKey=sessionKey)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 625, in simpleRequest
    raise splunk.AuthenticationFailed
splunk.AuthenticationFailed: [HTTP 401] Client is not authenticated
2023-12-01 10:16:45,150 ERROR [6569420d237fbde4dc8290] startup:112 - Unable to read in product version information; [HTTP 401] Client is not authenticated
2023-12-01 10:16:45,151 INFO [6569420d237fbde4dc8290] startup:139 - Splunk appserver version=UNKNOWN_VERSION build=000 isFree=False isTrial=True
2023-12-01 10:16:45,159 ERROR [6569420d237fbde4dc8290] config:149 - [HTTP 401] Client is not authenticated
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/config.py", line 147, in getServerZoneInfoNoMem
    return times.getServerZoneinfo()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/times.py", line 163, in getServerZoneinfo
    serverStatus, serverResp = splunk.rest.simpleRequest('/search/timeparser/tz', sessionKey=sessionKey)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 625, in simpleRequest
    raise splunk.AuthenticationFailed
splunk.AuthenticationFailed: [HTTP 401] Client is not authenticated
2023-12-01 10:36:53,327 INFO [656946c5357fdfe823efd0] error:321 - Masking the original 404 message: 'Nothing matches the given URI' with 'Page not found!' for security reasons
2023-12-01 10:36:53,329 INFO [656946c5347fdfe82389d0] error:321 - Masking the original 404 message: 'Nothing matches the given URI' with 'Page not found!' for security reasons
2023-12-01 10:36:53,342 INFO [656946c5357fdfe8216b90] startup:139 - Splunk appserver version=9.0.4 build=de405f4a7979 isFree=False isTrial=False
2023-12-01 10:36:53,430 INFO [656946c56c7fdfe818afd0] error:321 - Masking the original 404 message: 'Nothing matches the given URI' with 'Page not found!' for security reasons
2023-12-01 10:36:54,307 INFO [656946c64b7fdfe00b3e50] startup:139 - Splunk appserver version=9.0.4 build=de405f4a7979 isFree=False isTrial=False
2023-12-01 10:36:54,307 ERROR [656946c64b7fdfe00b3e50] utility:58 - name=javascript, class=Splunk.Error, lineNumber=3845, message=Uncaught TypeError: Cannot set properties of undefined (setting 'loadParams'), fileName=https://10.85.182.69:8000/zh-CN/manager/search/apps/local?msgid=5419270.9466664794685945
2023-12-01 10:36:54,307 ERROR [656946c64b7fdfe00b3e50] utility:58 - name=javascript, class=Splunk.Error, lineNumber=5, message=Uncaught TypeError: Cannot read properties of undefined (reading 'regional'), fileName=https://10.85.182.69:8000/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2/js/common.min.js
2023-12-01 10:37:37,961 INFO [656946f1f27fdfe8cddb50] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/0.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:37,963 INFO [656946f1f37fdfe80cb390] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/3.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:37,964 INFO [656946f1f37fdfe80b8c90] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/1.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:37,968 INFO [656946f1f57fdfe8c9a1d0] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/4.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:38,388 INFO [656946f2607fdfbc5dbad0] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/5.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:39,706 INFO [656946f3b27fdfe05e77d0] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/1.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:39,707 INFO [656946f3b27fdfbc533750] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/5.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:37:39,709 INFO [656946f3b17fdfe008eed0] error:321 - Masking the original 404 message: 'The path '/zh-CN/static/@0775A864B66952FFC07DAC805E2AAC735374D88D0EA5463E9E4CF36CF62A4344.2:1/app/qianxin-threat-intelligence-app/js/build/0.js' was not found.' with 'Page not found!' for security reasons
2023-12-01 10:59:07,462 INFO [65694bfb747fdfbc38f550] error:321 - Masking the original 404 message: 'The path '/en-US/static/app/search/$token_image_url$' was not found.' with 'Page not found!' for security reasons
2023-12-01 11:00:14,001 INFO [65694c3df97fdfbc5ccc50] startup:139 - Splunk appserver version=9.0.4 build=de405f4a7979 isFree=False isTrial=False
2023-12-01 11:00:14,072 INFO [65694c3df97fdfbc5ccc50] startup:139 - Splunk appserver version=9.0.4 build=de405f4a7979 isFree=False isTrial=False
2023-12-01 11:00:14,175 INFO [65694c3df97fdfbc5ccc50] cached:163 - /opt/splunk/etc/apps/search/appserver/static/setup.json
2023-12-01 11:00:14,437 INFO [65694c3df97fdfbc5ccc50] view:1137 - PERF - viewType=fastpath viewTime=0.2445s templateTime=0.0666s
2023-12-01 11:00:14,535 INFO [65694c3e807fdfe071d9d0] startup:139 - Splunk appserver version=9.0.4 build=de405f4a7979 isFree=False isTrial=False
2023-12-01 11:00:14,610 INFO [65694c3e957fdfe039f090] startup:139 - Splunk appserver version=9.0.4 build=de405f4a7979 isFree=False isTrial=False
2023-12-01 11:00:16,799 INFO [65694c40ca7fdfbc56c4d0] error:321 - Masking the original 404 message: 'The path '/en-US/static/app/search/$token_image_url$' was not found.' with 'Page not found!' for security reasons

(Screenshot of the white screen omitted.) What problem caused the white screen to occur? If you could help me, I would be extremely grateful!

I need an AppDynamics lab for practicing End User Monitoring (EUM), Synthetic Monitoring, and Business Analytics.

I want to repeat the same alert 3 times, 5 minutes apart, like a morning wake-up call. Please let me know how I can do this. Can I build the logic into the query, or is there an alert option for it? This is my query for the alert event:

index="main" sourcetype="orcl_sourcetype"
| sort _time
| tail 1
| where CNT < 10
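As far as I know there is no built-in "repeat N times" option on a single alert. One workaround is a cron schedule that fires at the three desired minutes; a sketch in savedsearches.conf (stanza name and times are placeholders):

[morning_call_alert]
cron_schedule = 0,5,10 9 * * *
search = index="main" sourcetype="orcl_sourcetype" | sort _time | tail 1 | where CNT < 10

This fires at 09:00, 09:05, and 09:10 every day. Make sure alert throttling is disabled, otherwise the second and third runs may be suppressed.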