All Posts


Hello, the time field is built by the search below:

| bin _time span=60
| eval Time1=strftime(_time+3600,"%A %H")
| eval eventstart = strftime(_time, "%H")
| eval eventend=01
| eval eventrange = eventstart+(eventend)
| eval eventrange=if(eventrange=-1, 23, eventrange)
| replace 0 with 00, 1 with 01, 2 with 02, 3 with 03, 4 with 04, 5 with 05, 6 with 06, 7 with 07, 8 with 08, 9 with 09
| eval Time2 = Time1.": [".eventstart."00 - ".eventrange."00] "

So the resulting TIME format is Day TIME_TO_hour: [TIME_FROM - TIME_TO].
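For what it's worth, a shorter way to build the same label might be to let strftime zero-pad both boundaries directly, which avoids the numeric eventrange and the replace chain. This is only a sketch, under the assumption that the range always runs from the hour of _time to the following hour:

| bin _time span=1h
| eval Time2 = strftime(_time+3600, "%A %H") . ": [" . strftime(_time, "%H") . "00 - " . strftime(_time+3600, "%H") . "00]"

strftime already zero-pads %H, and adding 3600 seconds handles the 23 -> 00 wrap-around without a special case.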
The .csv is actually used with "join". However, my question is more about just finding a file, whether it is a lookup or an input. I don't know if the .csv is the output of some other search, a script, or a file loaded into Splunk. Is there a way to find where it comes from if I know nothing but its name?
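Two searches that are sometimes useful for tracking a lookup down by name are sketched below; "mylookup.csv" is a placeholder for your file name, and both need permission to read the REST endpoints. The first lists lookup table files and the app they live in; the second looks for saved searches that write the file with outputlookup:

| rest /servicesNS/-/-/data/lookup-table-files
| search title="mylookup.csv"
| table title eai:acl.app eai:acl.owner updated

| rest /servicesNS/-/-/saved/searches
| search search="*outputlookup*mylookup.csv*"
| table title eai:acl.app search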
Hi @mythili, there's a timeout on search execution. In addition, you are probably an admin, while the user has different search settings. So you could suggest that your colleague run fewer concurrent searches. I don't recommend giving that role a higher number of concurrent searches, because you could run into performance issues. You could also suggest that your colleague run the search in the background; that way he/she can be sure the search doesn't time out. Ciao. Giuseppe
Hello, first of all, sorry for my bad English, I hope you can understand everything. My goal is to get the journald logs from the universalforwarder in JSON format to Splunk. (Splunk/UF Version 9.1.2) I use the app jorunald_input. inputs.conf (UF)     [journald://sshd] index = test sourcetype = test journalctl-filter = _SYSTEMD_UNIT=sshd.service       I've tried different props.conf functions. For example, something like this:  props.conf (UF)     [test] INDEXED_EXTRACTIONS = json KV_MODE = json SHOULD_LINEMERGE=false #INDEXED_EXTRACTIONS =json #NO_BINARY_CHECK=true #AUTO_KV_JSON = true       On the UF I check with the command     ps aux | grep journalctl     whether the query is enabled. It displays this command     journalctl -f -o json --after-cursor s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=c7efb124c33f43b0b0142ca0901ca8de;m=11aa0e450a21;t=233ae3422cd31;x=00af2c733a2cdfe7 _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE     I can try it out by using this command in the cli But I have to take out that part "--after-cursor ...." So I run the following command on the CLI to keep track of the journald logs:     journalctl -f -o json _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE     On the Universal forwarder, the tracked journald logs will then look like this:  (It would be a nice JSON format)     { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=a1aaa111a11aaa111aa000a0101;m=11aa00c5b9a0;t=233ae39a37aa2;x=00af2c733a2cdfe7", "__REALTIME_TIMESTAMP" : "1710831664593570", "__MONOTONIC_TIMESTAMP" : "27194940570016", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "Invalid user asdf from 111.11.111.111 port 111", "_PID" : "1430615" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2d;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec25bf;t=233ae39c9e6c0;x=10ac2c735c2cdfe7", "__REALTIME_TIMESTAMP" : "1710831667111616", "__MONOTONIC_TIMESTAMP" : "27194943088063", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2e;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec278a;t=233ae39c9e88c;x=5fb4c21ae6130519", "__REALTIME_TIMESTAMP" : "1710831667112076", "__MONOTONIC_TIMESTAMP" : "27194943088522", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2f;b=a1aaa111a11aaa111aa000a0101;m=11aa0108f5bf;t=233ae39e6b6c0;x=d072e90acf887129", "__REALTIME_TIMESTAMP" : "1710831668999872", "__MONOTONIC_TIMESTAMP" : "27194944976319", "_BOOT_ID" 
: "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a30;b=a1aaa111a11aaa111aa000a0101;m=11aa010e0295;t=233ae39ebc397;x=d1eb29e00003daa7", "__REALTIME_TIMESTAMP" : "1710831669330839", "__MONOTONIC_TIMESTAMP" : "27194945307285", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a31;b=a1aaa111a11aaa111aa000a0101;m=11aa012f0b3c;t=233ae3a0ccc3e;x=c33e28a6111c89ea", "__REALTIME_TIMESTAMP" : "1710831671495742", "__MONOTONIC_TIMESTAMP" : "27194947472188", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a32;b=a1aaa111a11aaa111aa000a0101;m=11aa0135591b;t=233ae3a131a1d;x=45420f6d2ca07377", "__REALTIME_TIMESTAMP" : "1710831671908893", "__MONOTONIC_TIMESTAMP" : "27194947885339", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "PRIORITY" : "3", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a33;b=a1aaa111a11aaa111aa000a0101;m=11aa01355bee;t=233ae3a131cf0;x=15b1aa1201a45cdf", "__REALTIME_TIMESTAMP" : "1710831671909616", "__MONOTONIC_TIMESTAMP" : "27194947886062", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]" } { "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a34;b=a1aaa111a11aaa111aa000a0101;m=11aa01355c42;t=233ae3a131d45;x=123f45a09e00a8a2", "__REALTIME_TIMESTAMP" : "1710831671909701", "__MONOTONIC_TIMESTAMP" : "27194947886146", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }      (Example)    But when I look for the logs on the search head, they look like this:     Invalid user asdf from 111.11.111.111 port 
111pam_unix(sshd:auth): check pass; user unknownpam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2pam_unix(sshd:auth): check pass; user unknownFailed password for invalid user asdf from 111.11.111.111 port 111 ssh2error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111       Does anyone know why the logs are written together and not to be considered individually? And why the logs are not in JSON format? Can anyone tell me a solution for this on how to fix the problem?   Thank you very much!
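For reference, journalctl -o json emits one JSON object per line, so a commonly suggested starting point is to make sure line breaking and JSON parsing are applied in exactly one place. This is a hedged sketch, not a verified fix for the journald input app; the [test] sourcetype matches the one above:

props.conf (on the UF, since INDEXED_EXTRACTIONS is applied at the forwarder):

[test]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

props.conf (on the search head, to avoid double extraction when indexed extractions are used):

[test]
KV_MODE = none
AUTO_KV_JSON = false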
In our dashboard, a user reported that she got a "Search was cancelled" message when she used it. I learned that multiple searches were running concurrently, so that could be why the search was cancelled. But when I tried to reproduce the issue, the searches kept getting queued and then ran; none were ever cancelled. In what scenarios is a search queued versus cancelled?
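For background, the concurrency limits that lead to queueing live in authorize.conf (per role) and limits.conf (system-wide). The stanza names below are the real setting names; the values are illustrative only, not a recommendation for your environment:

authorize.conf
[role_power]
srchJobsQuota = 6

limits.conf
[search]
base_max_searches = 6
max_searches_per_cpu = 1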
Hello @marnall, I appreciate your response; it did work for me after tuning it a bit:

| spath input=_raw path=details output=hold
| rex field=hold mode=sed "s/({\s*|\s*}|\s*)//g"
| makemv hold delim=","
| mvexpand hold
| rex field=hold "\"(?<key>[^\"]+)\":\"(?<value>[^\"]+)\""
| table orderNum key value orderLocation

However, I have a follow-up. Our admin has set some limits on the mvexpand command, and when I extend the search period to the last 3 days I get this warning: "output will be truncated at 6300 results due to excessive memory usage. Memory threshold of 500MB as configured in limits.conf / [mvexpand] / max_mem_usage_mb has been reached." Is there an alternative way to achieve the same results without using mvexpand, considering that on average there can be more than 50-60 key-value pairs under the "details" of a single event, and there can be 40K events per 30 days (assuming the retention period of the events is 30 days)?
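One possible way around mvexpand, sketched against the same field names used above (orderNum, orderLocation, details): capture every "key":"value" pair into a multivalue field with rex max_match=0 and then put that field in a stats by clause, which yields one row per pair without holding the whole expansion in memory the way mvexpand does:

| spath input=_raw path=details output=hold
| rex field=hold max_match=0 "(?<pair>\"[^\"]+\":\"[^\"]*\")"
| stats values(orderLocation) as orderLocation by orderNum pair
| rex field=pair "\"(?<key>[^\"]+)\":\"(?<value>[^\"]*)\""
| table orderNum key value orderLocation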
Hi. How can I change the background color of a pie chart dynamically through a drop-down selection? Is it okay to look like this in the picture below?

<form theme="dark">
  <label>Test2</label>
  <fieldset submitButton="false" autoRun="true"></fieldset>
  <row>
    <panel>
      <input type="dropdown" token="color_select">
        <label>Background</label>
        <choice value="#175565">Color1</choice>
        <choice value="#475565">Color2</choice>
      </input>
      <chart>
        <search>
          <query>index=p1991_m_tiltline_index_json_raw | top vin.number</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.backgroundColor">$color_selectiono$</option>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fontColor">#99CCFF</option>
        <option name="charting.foregroundColor">#EBF5FF</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.seriesColors">[0xEBF0F5,0xC2D1E0,0x99B2CC,0x7094B8,0x4775A3,0x2E5C8A,0x24476B,0x1A334C,0x0F1F2E,0x050A0F]</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.size">large</option>
        <option name="height">300</option>
      </chart>
    </panel>
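One detail worth noting in the XML above (an observation, not a tested fix): the dropdown defines token="color_select", but the chart option references $color_selectiono$. For the selection to drive the background, the two names would need to match, for example:

<option name="charting.backgroundColor">$color_select$</option>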
Hi @yh, as @PickleRick said, it's difficult to advise you without a defined use case. For this reason, the best approach is to define your requirements first and then design your data structure to meet them. One additional hint: whether to have more indexes and more data models depends on two factors: access grants and data retention. In other words, if all your users can access all data and the data must be retained for the same time, there's no reason to have different indexes or data models; you would just have more objects to manage and reference in searches, so I'd avoid duplicating DMs unless the requirements make it mandatory. Ciao. Giuseppe
The data will still be CIM compliant, though. I am simply replicating the data model, so I have two different sets of data models (all the settings are similar, but the whitelisted indexes in each data model are different). By cloning the original data model from the CIM app, I have: DMZ Network data model = only the DMZ index; Zone A Network data model = only the Zone A index. At the time, my goal was to make it easy for users to switch dashboards by simply swapping the data model in use, because DMZ and Zone A are highly distinct from one another. So would the best practice be to put everything in one common data model, e.g. the default Network data model = all indexes, and then separate out the zones by using filters like "where" in the search queries?
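For comparison, the single-data-model variant mentioned above usually ends up looking something like this sketch. Network_Traffic is the standard CIM data model, while dmz_fw is a hypothetical index name standing in for your DMZ indexes:

| tstats summariesonly=true count from datamodel=Network_Traffic where index=dmz_fw by Network_Traffic.src Network_Traffic.dest
| rename Network_Traffic.* as *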
Unless you have a very specific use case, you don't want to touch the CIM data models. They are the "common wisdom", and many existing searches and apps rely on the CIM being properly implemented and the data being CIM-compliant. The question is: what would be the goal of modifying the data models?
We had a Splunk Enterprise installation (9.2.0.1) on Windows Server 2019 and upgraded to Windows Server 2022 today. Splunk is only set up for local event log collection of events forwarded from other workstations. The Windows subscription and forwarded events are working, but Splunk hasn't been ingesting newer logs since the in-place upgrade to Server 2022. I also can't access Splunk's Event Log Collection settings since the upgrade; I'm met with a "Permission error". I have fully restarted the server and am tempted to reinstall Splunk as well. Any ideas?

Edit: Running with a free Splunk Enterprise license (<500 MB/day ingestion). The service runs under a separate domain user service account. The instance is only used to ingest local event logs that have been forwarded from other workstations. I can't see any other configuration that has changed.

inputs.conf

[default]
host = <servername>

[WinEventLog://ForwardedEvents]
disabled = false
index = applocker
renderXml = true
blacklist = 111
Adding to what's already been said, your search terms are very inefficient.

1. Searching for "* something *" makes no sense, since space is a major segmenter and you can just search for "something".
2. Searching for terms wildcarded at the beginning (like "*something") is very inefficient, since Splunk cannot use its internal index structures and has to dig through the raw data of all the events to find matches to your search terms.
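As a small illustration of the difference (the index name is a placeholder):

index=main something          efficient: a plain term can be matched against the index
index=main "* something *"    the quoted spaces add nothing; "something" alone matches the same events
index=main *something         the leading wildcard forces a scan of the raw event data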
Hello, I have been working with Splunk for a few months now, and we use it mainly for cyber security monitoring. With regard to the CIM data models, should I create separate data models for different zones, or combine everything in a single data model? For example, I have 3 independent network zones: DMZ, Zone A, and Zone B. Each of those zones has multiple indexes linked to it. Should I use the default CIM data model, e.g. datamodel=Authentication, with all the indexes from DMZ, Zone A, and Zone B, or should I make copies of the data model? Scenario 1: if I use a common data model, I would use, for example, where source=xxx to split things out for my queries and dashboards. Scenario 2: if I use separate data models, I would have datamodel=DMZ_Authentication and datamodel=ZoneA_Authentication, and perhaps use append when I need to see the overall picture. I am still confused about which is the best approach.
@harshal_chakran @suresh401 One of our Salesforce security guys found a workaround that involves modifying a few Python scripts under the lib folder. There are two methods, long polling and web sockets; long polling was applicable to us, so we only fixed that. Some information on the use of proxy settings in aiohttp can be found here: Advanced Client Usage — aiohttp 3.9.3 documentation. The fixes can be applied to the TA-sfdc-streaming-api pack, and below is what we modified to successfully subscribe via a proxy.

1. Modify /opt/splunk/etc/apps/TA-sfdc-streaming-api/lib/aiocometd/transports/long_polling.py: search for the one instance of "session.post" and add ,proxy="http://<proxyip>:<port>"
2. Modify /opt/splunk/etc/apps/TA-sfdc-streaming-api/lib/aiosfstream/auth.py: search for the two instances of "session.post" and add ,proxy="http://<proxyip>:<port>"

Hope this helps!
What is your data, and what visualisation do you want to show? If it's a Splunk table, then look at this documentation for how to colour table cells: https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML Perhaps you can say more about what your data looks like, how many entries you are talking about, and how you plan to visualise it.
Hi Splunkers, I'm currently working on customizing the Splunk login screen to change the logo, background, footer, etc. I referred to the Splunk documentation (https://docs.splunk.com/Documentation/Splunk/9.1.3/AdvancedDev/CustomizeLogin) and successfully completed the customization. Now, the Splunk login screen displays my logo, background image, and footer content. However, I encountered an issue when running Splunk App Inspect on the custom-login app I created. The Splunk App Inspect tool reported 8 failures:

1. The web.conf file contains a [settings] stanza, which is not permitted. Only [endpoint:] and [expose:] stanzas are allowed in web.conf. Line Number: 1
2. The 'local' directory exists, which is not allowed. All configuration should be in the 'default' directory.
3. The 'local.meta' file exists, which is not permitted. All metadata permissions should be set in 'default.meta'.
4. No README file was found, which should include version support, system requirements, installation, configuration, troubleshooting, and running of the app, or a link to online documentation.

I'm wondering if the [endpoint:*] and [expose:*] stanzas in web.conf are necessary for customizing the login screen. Are these stanzas required for login screen changes? All other issues have been fixed (for the production environment). Below is the corrected version of the custom_login app structure based on the Splunk App Inspect recommendations:

```
custom_login
|-- default
|   |-- app.conf
|   |-- web.conf
|   |-- restmap.conf
|   |-- savedsearches.conf
|   |-- ui_prefs.conf
|   |-- README
|   |-- data
|-- metadata
|   |-- default.meta
|-- static
|   |-- appIcon.png
|   |-- appIcon_2x.png
|   |-- appIconAlt.png
|   |-- appIconAlt_2x.png
|   |-- background.png
|   |-- fav_logo.png
|-- bin
|   |-- readme.txt
|-- appserver
|   |-- static
|       |-- background.png
|       |-- fav_logo.png
```

Here are the contents of the configuration files:

**app.conf**
```
[launcher]
author = NAME
description = <<<<<XXXXXXXXYYYYYY>>>>>>.
version = Custom login 1.0

[install]
is_configured = 0

[ui]
is_visible = 1
label = Custom app login

[triggers]
reload.web = web.conf
reload.restmap = restmap.conf
reload.ui_prefs = ui_prefs.conf
```

**restmap.conf**
```
# restmap.conf for custom_login
[endpoint:login_background]
# REST endpoint for login background image configurations
match = /custom_login
```

**ui_prefs.conf**
```
[settings]
loginBackgroundImageOption = custom
loginCustomBackgroundImage = custom_login:appserver/static/background.png
login_content = This is a server managed by my team. For any inquiries, please reach out to us at YYYY.com
loginCustomLogo = custom_login:appserver/static/fav_logo.png
loginFooterOption = custom
loginFooterText = © 2024 XXXXXXX
```

**web.conf**
```
[endpoint:login]

[expose:login_background]
pattern = /custom_login
methods = GET
```

I am currently working in a development environment. Any advice on how to proceed with these changes would be appreciated. Thanks in advance.
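As a side note, hedged rather than a verified answer: the keys used in ui_prefs.conf above (loginBackgroundImageOption, loginCustomBackgroundImage, loginCustomLogo, loginFooterOption, loginFooterText) are described in the CustomizeLogin documentation referenced above as web.conf [settings] values, usually set in $SPLUNK_HOME/etc/system/local/web.conf on the instance rather than shipped inside an app, which lines up with App Inspect rejecting a [settings] stanza in an app's web.conf. In that layout the web.conf fragment would look like this, reusing the values from the post:

```
[settings]
loginBackgroundImageOption = custom
loginCustomBackgroundImage = custom_login:appserver/static/background.png
loginCustomLogo = custom_login:appserver/static/fav_logo.png
loginFooterOption = custom
loginFooterText = © 2024 XXXXXXX
```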
stats count as Value by Time cannot give you a non-numeric value like the one you have in your lookup. But aside from that, if you have a lookup table with time and value, then you can just look up the time in the lookup to get the values. In your stats by time, what is "time" in your example: a formatted time representation, the unprocessed Splunk _time field, or the _time field with a | bin bucket alignment set? Perhaps what you want is something along these lines:

search bla
| bin _time span=1h
| stats count by _time category
| eval Time=strftime(_time, "%A %d: [%H%M - ") . strftime(relative_time(_time, "+1h"), "%H%M]")
| eval Value=count." ".category
| lookup lookup.csv Time Value OUTPUT Value as Found
| where isnull(Found)

so that if your Time field in the lookup is Day DayOfMonth [TIME_FROM - TIME_TO] and the Value is the expected count plus the category (AA/BN/TY), then the search above just does the lookup and expects to find the value; if it does not, Found will be null.
Also note that transaction will be slow and, if you have a lot of data, can simply give you wrong results, as it is constrained by memory limits and may silently discard events. transaction can often be replaced with stats, although it may take some extra steps to get your data into a state where stats works, and it's not obvious how you would do that here.
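As a generic illustration of the transaction-to-stats pattern mentioned above (the field names session_id and action are hypothetical, not taken from this thread), a grouping such as

| transaction session_id

can often be rewritten as

| stats min(_time) as start_time max(_time) as end_time values(action) as actions count by session_id
| eval duration = end_time - start_time

which tends to scale better because stats does not have to keep whole event groups in memory the way transaction does.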