All Topics

I'm trying to get at the results of a phantom.act() action run, more specifically the Splunk HTTP app "get file" action. Something as simple as:

# inside a custom code block
def get_action_results(**kwargs):
    phantom.debug(action)
    phantom.debug(success)
    phantom.debug(results)
    phantom.debug(handle)
    return

phantom.act('get file', parameters=[{'hostname': '', 'file_path': '/path/to/file'}], assets=["web_server"], callback=get_action_results)

The action will run as expected, however the callback isn't getting the results output. Am I misunderstanding callbacks in this scenario?
I want my send email action's email body to be a table, like my search results. How do I pass dynamic token field values ($result.name$, $result.index$, $result.sourcetype$), and how do I make the field values come side by side instead of below one another?

This is how I am getting it now in my email body:

name name2 name3 name4
index index2 index3 index4
sourcetype sourcetype2 sourcetype3 sourcetype4

I want it to be like below:

name index sourcetype
name2 index2 sourcetype2
name3 index3 sourcetype3
name4 index4 sourcetype4

Is it possible to do?
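One approach, sketched under the assumption that the alert can use the sendemail command directly (the recipient address and search are placeholders): sendresults=true together with inline=true and format=table renders the search results as a table in the message body, which gives the row-per-result layout described above without per-field tokens.

```
index=your_index
| table name index sourcetype
| sendemail to="you@example.com" subject="Results" sendresults=true inline=true format=table
```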
Hello, I am testing SEDCMD on a single-server Splunk architecture. Below is the current configuration, which is put into /opt/splunk/etc/system/local/. I am uploading a CSV file which contains (fake) individual data including two formats of SSN (xxx-xx-xxxx & xxxxxxxxx). The masking is not working when I upload the CSV file. Can someone help point me in the right direction?

props.conf

### CUSTOM ###
[csv]
SEDCMD-redact_ssn = s/\b\d{3}-\d{2}-\d{4}\b/XXXXXXXXX/g

Included below is FAKE individual data pulled from the CSV file for testing:

514302782,f,1986/05/27,Nicholson,Russell,Jacki,3097 Better Street,Kansas City,MO,66215,913-227-6106,jrussell@domain.com,a,345389698201044,232,2010/01/01
505-88-5714,f,1963/09/23,Mcclain,Venson,Lillian,539 Kyle Street,Wood River,NE,68883,308-583-8759,lvenson@domain.com,d,30204861594838,471,2011/12/01
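Two observations worth hedging here. First, this pattern only matches the dashed format, so the nine-digit form (e.g. 514302782) would need a second SEDCMD expression. Second, the expression itself can be sanity-checked at search time, since rex mode=sed uses the same s/// syntax as SEDCMD (the sample row is taken from the post):

```
| makeresults
| eval _raw="505-88-5714,f,1963/09/23,Mcclain,Venson,Lillian"
| rex mode=sed field=_raw "s/\b\d{3}-\d{2}-\d{4}\b/XXXXXXXXX/g"
| table _raw
```

If that search masks the value but indexing still does not, the stanza is probably not being applied to the data (wrong sourcetype, or wrong instance in the pipeline) rather than the regex being at fault.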
Thanks in advance. I need to show a status: if P_RETURN_STATUS is success, show SUCCESS; if error, show ERROR; if P_RETURN_STATUS is error and P_MESSAGE is NO NEW BATCH EXISTS, show SUCCESS. But P_RETURN_STATUS already has values of error. How do I override it when using the AND condition?

| eval P_RETURN_STATUS=case(like('P_RETURN_STATUS',"%SUCCESS%"),"SUCCESS", like('P_RETURN_STATUS',"%ERROR%"),"ERROR", like('P_MESSAGE',"%NO NEW BATCH EXISTS%") AND like('P_RETURN_STATUS',"%ERROR%"),"SUCCESS")
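case() evaluates its condition/value pairs in order and returns the value for the first condition that matches, so in the search above the plain %ERROR% branch fires before the AND branch is ever reached. A minimal sketch with the most specific condition moved first (field names taken from the post):

```
| eval P_RETURN_STATUS=case(
    like('P_RETURN_STATUS',"%ERROR%") AND like('P_MESSAGE',"%NO NEW BATCH EXISTS%"), "SUCCESS",
    like('P_RETURN_STATUS',"%SUCCESS%"), "SUCCESS",
    like('P_RETURN_STATUS',"%ERROR%"), "ERROR")
```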
Hello everyone, I have a problem with an automatic lookup and user roles. Let me explain: with the admin role, everything is okay; the automatic lookup works and there are no warnings on my dashboard. When I use another role, it doesn't work. I've checked all the permissions for the lookups and knowledge objects, assigning them to the role in question, but nothing works; I'm really out of ideas. Thank you very much for your help.
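One quick isolation step, assuming a file-based lookup (the name below is a placeholder): log in as a user with the failing role and run the lookup directly. If this errors out or returns nothing, the issue is read permission or sharing level on the lookup table file or definition rather than the automatic lookup itself; the lookup also has to be shared at app or global level so it is visible in the app where the dashboard's searches run.

```
| inputlookup my_lookup.csv
```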
Hi, I am trying to mask some passwords but I cannot figure out the proper props.conf (ha!) for it. It works on the fly but not when I try to set it in props.conf.

This is my mask on the fly, basically just replacing the password with some characters:

rex mode=sed field=ms_Mcs_AdmPwd "s/ms_Mcs_AdmPwd=(\w+)/###\2/g"

And this is the raw data from sourcetype ActiveDirectory:

Additional Details:
                                  msLAPS-PasswordExpirationTime=133579223312233231
                                  ms-Mcs-AdmPwd=RlT34@iw4dasdasd

How would I do this in props.conf or transforms.conf?

Oliver
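A hedged index-time sketch, assuming the goal is to mask the value inside _raw for this sourcetype. Note that the raw attribute name uses dashes (ms-Mcs-AdmPwd), while search-time field names use underscores, and that SEDCMD must be deployed on the first full Splunk instance that parses the data (an indexer or heavy forwarder, not a universal forwarder):

```
# props.conf on the indexer or heavy forwarder
[ActiveDirectory]
SEDCMD-mask_admpwd = s/ms-Mcs-AdmPwd=\S+/ms-Mcs-AdmPwd=###/g
```

Once the value is masked at index time, the search-time extraction will pick up the ### value, so the separate rex should no longer be needed.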
The Splunk readiness app cannot determine if the Mission Control app is Python compatible.
Currently, I have a search that returns the following:

Search: index=index1 sourcetype=sourcetype1 | table host, software{}

host          software
hostname      cpe:/a:vendor:product:version
              cpe:/a:vendor:product:version
              cpe:/a:vendor:product:version
              cpe:/a:vendor:product:version
              cpe:/a:vendor:product:version
hostname      cpe:/a:vendor:product:version
              ...
              ...

Here, there are multiple software entries tied to one hostname, all under one multivalue field called software{}. What I am looking for is a way to split the software field into three fields, extracting the vendor, the product, and the version, to return:

host          software_vendor    software_product    software_version
hostname      vendor             product             version
              vendor             product             version
              vendor             product             version
              vendor             product             version
              vendor             product             version
hostname      vendor             product             version
              ...
              ...

Does anyone have any ideas?
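One sketch, assuming every value follows the cpe:/a:vendor:product:version pattern: expand the multivalue field so each CPE string becomes its own row, then pull the three segments apart with a named-group rex.

```
index=index1 sourcetype=sourcetype1
| rename software{} as software
| mvexpand software
| rex field=software "cpe:/a:(?<software_vendor>[^:]+):(?<software_product>[^:]+):(?<software_version>[^:]+)"
| table host software_vendor software_product software_version
```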
Is there a way to use Splunk to find out if Wireshark is installed on any of our systems? Is there a query for this?
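Whether this is answerable depends entirely on what inventory data Splunk is ingesting. Purely as an illustration: if the Splunk Add-on for Windows were collecting installed-application inventory via WinHostMon, a sketch along these lines might work; the index, sourcetype, and field names below are assumptions, not a known-good query for any environment.

```
index=windows sourcetype=WinHostMon source=Application Name="*Wireshark*"
| stats latest(Version) as version by host
```

Without such an inventory input (or registry/endpoint-security data), there is nothing for a query to find.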
We are currently indexing big log files (~1 GB in size) in our Splunk indexer using the Splunk Universal Forwarder. All the log data is stored in a single index. We want to make sure the log data is deleted one week after the date it was indexed. Is there a way to achieve this?
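Retention in Splunk is set per index, and it is bucket-based: a bucket is frozen (which means deleted, if no frozen archive is configured) once its newest event is older than frozenTimePeriodInSecs. This approximates rather than exactly matches "one week from indexing", since it works on event timestamps and whole buckets. A minimal sketch, with the index name as a placeholder:

```
# indexes.conf on the indexer
[your_index]
# 7 days x 24 h x 3600 s = 604800
frozenTimePeriodInSecs = 604800
# no coldToFrozenDir / coldToFrozenScript configured, so frozen buckets are deleted
```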
Hi, I want to embed a dashboard in my own webpage. First, I found the "EDFS" app, but after installing it and following the steps, I didn't see the "EDFS" option in the input. When I include this in the HTML file, it responds with "refused to connect":

<iframe src="https://127.0.0.1:9999" seamless frameborder="no" scrolling="no" width="1200" height="2500"></iframe>

Also, if I add "trustedIP=127.0.0.1" in the server.conf file, then when I open Splunk Web using "127.0.0.1:8000" it shows an "Unauthorized" error. Additionally, I found that adding "x_frame_options_sameorigin = 0" and "enable_insecure_login = true" in the web.conf file and including this in the HTML file:

<iframe src="http://splunkhost/account/insecurelogin?username=viewonlyuser&password=viewonly&return_to=/app/search/dashboardname" seamless frameborder="no" scrolling="no" width="1200" height="2500"></iframe>

shows the Splunk Web login page with the error message "No cookie support detected. Check your browser configuration." If I try to log in with the username and password, it still doesn't work and shows a "Server error" message. If I open the HTML file in a Firefox incognito window, it skips the login page and displays "Unauthorized."

Is there a way to solve these issues, or an alternative method to display the dashboard on an external webpage? Thanks in advance.
We have recently migrated to SmartStore. Post-migration, the search factor (SF) and replication factor (RF) are not met. Can anyone help me with the troubleshooting steps?
Hi Splunkers, I have a question about a possible issue with UF management via a Deployment Server. In a customer environment, some UFs have been installed on Windows servers; they send data to a dedicated HF. Now we want to manage them with a Deployment Server. The point is this: those UFs were installed with the graphical wizard, and during installation the data to collect and send to the HF was selected, so inputs.conf was populated at that stage, in a GUI manner. In some Splunk course material (I don't remember which one; it should be the Splunk Enterprise Admin course), I saw this warning: if inputs.conf for a Windows UF is set with the graphical wizard, as in our case, the Deployment Server could have problems interacting with those UFs, and might even be unable to manage them. Is this confirmed? Do you know in which section of the documentation I can find evidence of this?
Hello, first of all, sorry for my bad English; I hope you can understand everything. My goal is to get the journald logs from the Universal Forwarder into Splunk in JSON format (Splunk/UF version 9.1.2). I use the journald_input app.

inputs.conf (UF):

[journald://sshd]
index = test
sourcetype = test
journalctl-filter = _SYSTEMD_UNIT=sshd.service

I've tried different props.conf settings. For example, something like this:

props.conf (UF):

[test]
INDEXED_EXTRACTIONS = json
KV_MODE = json
SHOULD_LINEMERGE = false
#INDEXED_EXTRACTIONS = json
#NO_BINARY_CHECK = true
#AUTO_KV_JSON = true

On the UF I check with the command "ps aux | grep journalctl" whether the query is enabled. It displays this command:

journalctl -f -o json --after-cursor s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=c7efb124c33f43b0b0142ca0901ca8de;m=11aa0e450a21;t=233ae3422cd31;x=00af2c733a2cdfe7 _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE

I can try it out by running this command in the CLI, but I have to take out the "--after-cursor ..." part. So I run the following command on the CLI to keep track of the journald logs:

journalctl -f -o json _SYSTEMD_UNIT=sshd.service -q --output-fields PRIORITY,_SYSTEMD_UNIT,_SYSTEMD_CGROUP,_TRANSPORT,_PID,_UID,_MACHINE_ID,_GID,_COMM,_EXE,MESSAGE

On the Universal Forwarder, the tracked journald logs then look like this (it would be a nice JSON format, one object per event):

{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2c;b=a1aaa111a11aaa111aa000a0101;m=11aa00c5b9a0;t=233ae39a37aa2;x=00af2c733a2cdfe7", "__REALTIME_TIMESTAMP" : "1710831664593570", "__MONOTONIC_TIMESTAMP" : "27194940570016", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "Invalid user asdf from 111.11.111.111 port 111", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2d;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec25bf;t=233ae39c9e6c0;x=10ac2c735c2cdfe7", "__REALTIME_TIMESTAMP" : "1710831667111616", "__MONOTONIC_TIMESTAMP" : "27194943088063", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2e;b=a1aaa111a11aaa111aa000a0101;m=11aa00ec278a;t=233ae39c9e88c;x=5fb4c21ae6130519", "__REALTIME_TIMESTAMP" : "1710831667112076", "__MONOTONIC_TIMESTAMP" : "27194943088522", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a2f;b=a1aaa111a11aaa111aa000a0101;m=11aa0108f5bf;t=233ae39e6b6c0;x=d072e90acf887129", "__REALTIME_TIMESTAMP" : "1710831668999872", "__MONOTONIC_TIMESTAMP" : "27194944976319", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a30;b=a1aaa111a11aaa111aa000a0101;m=11aa010e0295;t=233ae39ebc397;x=d1eb29e00003daa7", "__REALTIME_TIMESTAMP" : "1710831669330839", "__MONOTONIC_TIMESTAMP" : "27194945307285", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "pam_unix(sshd:auth): check pass; user unknown", "_PID" : "1430615" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a31;b=a1aaa111a11aaa111aa000a0101;m=11aa012f0b3c;t=233ae3a0ccc3e;x=c33e28a6111c89ea", "__REALTIME_TIMESTAMP" : "1710831671495742", "__MONOTONIC_TIMESTAMP" : "27194947472188", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a32;b=a1aaa111a11aaa111aa000a0101;m=11aa0135591b;t=233ae3a131a1d;x=45420f6d2ca07377", "__REALTIME_TIMESTAMP" : "1710831671908893", "__MONOTONIC_TIMESTAMP" : "27194947885339", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "PRIORITY" : "3", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a33;b=a1aaa111a11aaa111aa000a0101;m=11aa01355bee;t=233ae3a131cf0;x=15b1aa1201a45cdf", "__REALTIME_TIMESTAMP" : "1710831671909616", "__MONOTONIC_TIMESTAMP" : "27194947886062", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "PRIORITY" : "6", "_UID" : "0", "_MACHINE_ID" : "1111", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "_PID" : "1430615", "MESSAGE" : "Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]" }
{ "__CURSOR" : "s=a12345ab1abc12ab12345a01f1e920538;i=43a34;b=a1aaa111a11aaa111aa000a0101;m=11aa01355c42;t=233ae3a131d45;x=123f45a09e00a8a2", "__REALTIME_TIMESTAMP" : "1710831671909701", "__MONOTONIC_TIMESTAMP" : "27194947886146", "_BOOT_ID" : "a1aaa111a11aaa111aa000a0101", "_TRANSPORT" : "syslog", "_UID" : "0", "_MACHINE_ID" : "1111", "PRIORITY" : "5", "_GID" : "0", "_COMM" : "sshd", "_EXE" : "/usr/sbin/sshd", "_SYSTEMD_CGROUP" : "/system.slice/sshd.service", "_SYSTEMD_UNIT" : "sshd.service", "MESSAGE" : "PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111", "_PID" : "1430615" }

(Example)

But when I look for the logs on the search head, they look like this:

Invalid user asdf from 111.11.111.111 port 111pam_unix(sshd:auth): check pass; user unknownpam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111Failed password for invalid user asdf from 111.11.111.111 port 111 ssh2pam_unix(sshd:auth): check pass; user unknownFailed password for invalid user asdf from 111.11.111.111 port 111 ssh2error: Received disconnect from 111.11.111.111 port 111:11: Unable to authenticate [preauth]Disconnected from invalid user asdf 111.11.111.111 port 111 [preauth]PAM 1 more authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=111.11.111.111

Does anyone know why the logs are concatenated instead of being treated as individual events, and why they are not in JSON format? Can anyone suggest a solution for how to fix this?

Thank you very much!
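Not an authoritative diagnosis, but two things stand out. KV_MODE is a search-time setting, so it belongs in props.conf on the search head rather than the UF, and INDEXED_EXTRACTIONS only takes effect where the forwarder actually parses the stream, which is not guaranteed for a modular input. If the indexer is then left without a line-breaking rule for the sourcetype, consecutive journald records can be merged into one event, which matches the concatenated text above. A sketch to try, assuming each JSON object arrives on its own line and every record begins with the __CURSOR key:

```
# props.conf on the indexer (or heavy forwarder)
[test]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\{\s*"__CURSOR")

# props.conf on the search head
[test]
KV_MODE = json
```

With the events broken correctly and KV_MODE=json on the search head, the fields should come out of the JSON at search time.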
In our dashboard, a user reported that she got a "Search was cancelled" message while using it. I came to know that multiple searches were running concurrently, so the search could have been cancelled for that reason. But when I tried to reproduce the issue, the searches kept getting queued and eventually ran; they were never cancelled. In what scenarios is a search queued versus cancelled?
Hi. How can I change the background color of the pie chart dynamically through a drop-down selection? Is it okay for it to look like the picture below?

<form theme="dark">
  <label>Test2</label>
  <fieldset submitButton="false" autoRun="true"></fieldset>
  <row>
    <panel>
      <input type="dropdown" token="color_select">
        <label>Background</label>
        <choice value="#175565">Color1</choice>
        <choice value="#475565">Color2</choice>
      </input>
      <chart>
        <search>
          <query>index=p1991_m_tiltline_index_json_raw | top vin.number</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.axisTitleX.visibility">visible</option>
        <option name="charting.axisTitleY.visibility">visible</option>
        <option name="charting.axisTitleY2.visibility">visible</option>
        <option name="charting.backgroundColor">$color_selectiono$</option>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fontColor">#99CCFF</option>
        <option name="charting.foregroundColor">#EBF5FF</option>
        <option name="charting.legend.placement">right</option>
        <option name="charting.seriesColors">[0xEBF0F5,0xC2D1E0,0x99B2CC,0x7094B8,0x4775A3,0x2E5C8A,0x24476B,0x1A334C,0x0F1F2E,0x050A0F]</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.size">large</option>
        <option name="height">300</option>
      </chart>
    </panel>
  </row>
</form>
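One likely culprit, noted as a hedge rather than a verified fix: the dropdown defines the token color_select, but the chart option references $color_selectiono$, so the option never receives a value. With the token names matched, selecting a color should restyle the pie background:

```
<option name="charting.backgroundColor">$color_select$</option>
```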
We had a Splunk Enterprise installation (9.2.0.1) on Windows Server 2019 and upgraded to Windows Server 2022 today. Splunk is only set up for local event log collection of events forwarded from other workstations. The Windows subscription and forwarded events are working, but Splunk isn't ingesting newer logs since the in-place upgrade to Server 2022. I can't seem to access Splunk's Event Log Collection settings since the upgrade either, and am met with a "Permission error". I have restarted the server fully and am tempted to re-install Splunk as well. Any ideas?

Edit: Running with a free Splunk Enterprise license (<500 MB/day ingestion). The service runs with a separate domain user service account. Splunk is only used to ingest local event logs that have been forwarded from other workstations. I can't see any other configuration that has changed.

inputs.conf

[default]
host = <servername>

[WinEventLog://ForwardedEvents]
disabled = false
index = applocker
renderXml = true
blacklist = 111
Hello, I have been working with Splunk for a few months now, and we use it mainly for cyber security monitoring. With regard to the CIM data models, should I create a separate data model for each zone, or combine everything in a single data model? For example, I have 3 independent network zones: DMZ, Zone A, and Zone B. Each zone has multiple indexes linked to it. Should I use the default CIM data models, e.g. datamodel=Authentication, with all the indexes from DMZ, Zone A, and Zone B, or should I make copies of the data model?

Scenario 1: If I use a common data model, I would use something like where source=xxx to split things out for my queries and dashboards.

Scenario 2: If I use separate data models, I would have datamodel=DMZ_Authentication, datamodel=ZoneA_Authentication, and so on, and perhaps use append when I need to see the overall picture.

I'm still confused about which is the better approach.
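For what it's worth, a sketch of how the scenario 1 zone filter often looks with tstats (the index names are placeholders): constraining on index in the where clause keeps one shared, accelerated Authentication data model while still splitting reports per zone, and is generally cheaper than filtering on source after the fact.

```
| tstats summariesonly=true count
    from datamodel=Authentication
    where index=dmz_*
    by Authentication.action, Authentication.src
```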
Hi Splunkers, I'm currently working on customizing the Splunk login screen to change the logo, background, footer, etc. I referred to the Splunk documentation (https://docs.splunk.com/Documentation/Splunk/9.1.3/AdvancedDev/CustomizeLogin) and successfully completed the customization. Now the Splunk login screen displays my logo, background image, and footer content.

However, I encountered an issue when running Splunk AppInspect on the custom-login app I created. The tool reported 8 failures, including:

- The web.conf file contains a [settings] stanza, which is not permitted. Only [endpoint:*] and [expose:*] stanzas are allowed in web.conf. (Line Number: 1)
- The 'local' directory exists, which is not allowed. All configuration should be in the 'default' directory.
- The 'local.meta' file exists, which is not permitted. All metadata permissions should be set in 'default.meta'.
- No README file was found; it should include version support, system requirements, installation, configuration, troubleshooting, and running of the app, or a link to online documentation.

I'm wondering whether the [endpoint:*] and [expose:*] stanzas in web.conf are necessary for customizing the login screen. Are these stanzas required for login screen changes? All other issues have been fixed (for the production environment).

Below is the corrected version of the custom_login app structure based on the AppInspect recommendations:

```
custom_login
|-- default
|   |-- app.conf
|   |-- web.conf
|   |-- restmap.conf
|   |-- savedsearches.conf
|   |-- ui_prefs.conf
|   |-- README
|   |-- data
|-- metadata
|   |-- default.meta
|-- static
|   |-- appIcon.png
|   |-- appIcon_2x.png
|   |-- appIconAlt.png
|   |-- appIconAlt_2x.png
|   |-- background.png
|   |-- fav_logo.png
|-- bin
|   |-- readme.txt
|-- appserver
|   |-- static
|   |   |-- background.png
|   |   |-- fav_logo.png
```

Here are the contents of the configuration files:

**app.conf**
```
[launcher]
author = NAME
description = <<<<<XXXXXXXXYYYYYY>>>>>>.
version = Custom login 1.0

[install]
is_configured = 0

[ui]
is_visible = 1
label = Custom app login

[triggers]
reload.web = web.conf
reload.restmap = restmap.conf
reload.ui_prefs = ui_prefs.conf
```

**restmap.conf**
```
# restmap.conf for custom_login
[endpoint:login_background]
# REST endpoint for login background image configurations
match = /custom_login
```

**ui_prefs.conf**
```
[settings]
loginBackgroundImageOption = custom
loginCustomBackgroundImage = custom_login:appserver/static/background.png
login_content = This is a server managed by my team. For any inquiries, please reach out to us at YYYY.com
loginCustomLogo = custom_login:appserver/static/fav_logo.png
loginFooterOption = custom
loginFooterText = © 2024 XXXXXXX
```

**web.conf**
```
[endpoint:login]

[expose:login_background]
pattern = /custom_login
methods = GET
```

I am currently working in a development environment. Any advice on how to proceed with these changes would be appreciated. Thanks in advance.
Hello good folks, I have this requirement: for a given time period, I need to send out an alert if a particular 'value' doesn't come up. This is to be identified by referring to a lookup table which has the list of all possible values that can occur in a given time period. The lookup table is of the following format:

Time                          Value
Monday 14: [1300 - 1400]      412790 AA
Monday 14: [1300 - 1400]      114556 BN
Monday 15: [1400 - 1500]      243764 TY

Based on this, in the live count for the given time period (let's take Monday 14: [1300 - 1400] as an example), if I do a stats count as Value by Time and I don't get "114556 BN" as one of the values, an alert is to be generated.

Where I'm stuck is matching the time with the values. If I use inputlookup first, I am not able to pass the time from the master time picker, so I cannot check a specific time frame (in this case an hour). If I run the index search first, I can match the time against the lookup using | join type=left, but then I am not able to find the missing values, i.e. those present in the lookup but absent from the live count.

Would appreciate some advice on how to go about this. Thanks in advance!
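One pattern that may help, sketched with placeholder lookup and index names and assuming the live events carry a field named Value (rename to match if not): drive the search from the lookup and left-join the live counts onto it, so expected values with no live match survive as rows with a null count. The Time filter is hard-coded for the example; in the actual alert it would come from a token or be derived from now().

```
| inputlookup expected_values.csv
| where Time="Monday 14: [1300 - 1400]"
| join type=left Value
    [ search index=your_index earliest=-1h@h latest=@h
      | stats count by Value ]
| where isnull(count)
```

Any rows remaining after the final where are values the lookup expects for that hour but that never appeared, which is exactly the alert condition.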