All Topics


Hello, our security team has needed an asset management tool to keep track of our hardware and software inventory with respect to our security processes and security controls. Our support team already maintains a CMDB, but it doesn't do a great job and provides almost no value as a master list or as a way to audit for gaps in security control coverage. Our team deploys a variety of tools that use agents or network discovery scans to produce partial asset inventories, and when we compare them, none is complete enough to avoid variance between tools. We would like a CMDB that allows us to track our assets and our security control coverage. You cannot secure what you don't know about! One idea has been to grab asset information from all the tools using custom API input scripts and aggregate it in Splunk into one KV store table, which we could then use as a master list. We have the Splunk deployment clients and the asset_discovery scan results, but we also have cloud-delivered solutions for vuln mgmt, EDR, AV, MDM, etc. I wanted to reach out to the community to see if anybody else has come across this use case and whether there are any resources or guidance to share to make this idea a reality.
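For the aggregation idea above, a minimal SPL sketch along these lines might help get started; the index names, field names, and the asset_master lookup (a lookup definition backed by a KV store collection you would create in collections.conf/transforms.conf) are all placeholders, not names taken from any of the tools mentioned:

    (index=uf_phonehome) OR (index=asset_discovery) OR (index=edr_inventory) OR (index=vuln_scans)
    | eval asset_key=lower(coalesce(nt_host, dns, ip))
    | stats latest(_time) as last_seen, values(index) as seen_by, latest(ip) as ip, latest(owner) as owner by asset_key
    | outputlookup asset_master

Scheduled periodically, a search like this keeps one merged record per asset and records which tools have seen it, which supports the gap analysis described above (e.g. seen by EDR but never by vuln mgmt).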
Running the SCMA app pre-migration checks in preparation for moving our environment to Splunk Cloud, we were notified of a number of old dashboards floating around that use the deprecated Advanced XML format. As most or all of these are no longer needed, I decided to delete them. However, it appears that the Search and Reporting app (where most of these dashboards reside) is not managed by our SHC deployer, and the old dashboards cannot be deleted from the GUI (Settings > User interface > Views). Most dashboards have a Delete option, but none of the AXML dashboards allow this action. Other than manually 'rm -rf'ing on the backend on all our search heads, is there another way I can easily delete these dashboards?
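One alternative to touching the filesystem, as a hedged sketch: views can usually be removed over the REST API (the hostname, credentials, owner, app, and view name below are placeholders):

    curl -k -u admin:yourpassword -X DELETE \
      https://sh01.example.com:8089/servicesNS/nobody/search/data/ui/views/old_axml_dashboard

On a search head cluster, deleting a user- or local-level view on one member should replicate to the others via configuration replication. The caveat: if a view exists only in an app's default directory, neither the UI nor REST can delete it (which may be why the Delete action is missing for these AXML dashboards), and in that case removing the file on each member, or shipping an updated app, really is the remaining option.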
Two different sources return data in the format below.

Source 1 - Determines the time range for a given date based on the execution of a job, which logically concludes the end of day in the application.
Source 2 - Events generated in real time for various use cases in the application. EventID1 is generated as part of the job in Source 1.

Source 1
DATE    Start Time                End Time
Day 3   2023-09-12 01:12:12.123   2023-09-13 01:13:13.123
Day 2   2023-09-11 01:11:11.123   2023-09-12 01:12:12.123
Day 1   2023-09-10 01:10:10.123   2023-09-11 01:11:11.123

Source 2
Event type   Time                      Others
EventID2     2023-09-11 01:20:20.123
EventID1     2023-09-11 01:11:11.123
EventID9     2023-09-10 01:20:30.123
EventID3     2023-09-10 01:20:10.123
EventID5     2023-09-10 01:10:20.123
EventID1     2023-09-10 01:10:10.123

There are no common fields to join the two sources other than the time at which the job is executed, which is also when EventID1 is generated. The expectation is to logically group the events by date and derive the stats for each day. I'm new to Splunk and would really appreciate suggestions on how to handle this.

Expected Result
Date    Events                                           Count
Day 1   EventID1, EventID2, EventID3, ..., EventID9      1, 15, 10, ..., 8
Day 2   EventID1, EventID2, ..., EventID9, EventID11     1, 2, ..., 18, 6
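Since EventID1 itself marks each day boundary in Source 2, one hedged sketch that avoids a join entirely is to count how many EventID1 markers have occurred up to each event and use that count as the day number (the index, sourcetype, and the 'Event type' field name are assumptions based on the sample above):

    index=app_events sourcetype=source2
    | sort 0 + _time
    | streamstats count(eval('Event type'=="EventID1")) as day_no
    | eval Date="Day ".day_no
    | chart count over Date by "Event type"

If you need the exact Start Time/End Time labels from Source 1 rather than a computed day number, you could instead export Source 1 to a lookup and match each event's _time against the ranges with lookup or eval case() logic.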
Hello, has anyone tried parsing Windows Event Logs using a KV store for all of the generic verbiage? For example, the general/static text associated with the EventCode and other values (mostly the Message/body fields) would be kept in a KV store, while the dynamic values within the event would be indexed. In the example below there are 2,150 characters, of which 214 are dynamic and need to be indexed; the red(undant) static text is over 1,930 characters. These are just logon (4624) events. With over 11.5 million logon events per day across our environment, this is ~23 GB. If what I am asking can be/has been accomplished, we could reduce this to 2.3 GB. Thanks and God bless, Genesius

09/11/2023 12:00:00AM
LogName=Security
EventCode=4624
EventType=0
ComputerName=<computer name>
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=9696969696
Keywords=Audit Success
TaskCategory=Logon
OpCode=Info
Message=An account was successfully logged on.

Subject:
    Security ID: NT AUTHORITY\SYSTEM
    Account Name: <account name>
    Account Domain: <account domain>
    Logon ID: 0x000

Logon Information:
    Logon Type: 3
    Restricted Admin Mode: -
    Virtual Account: No
    Elevated Token: Yes

Impersonation Level: Identification

New Logon:
    Security ID: <security id>
    Account Name: <account name>
    Account Domain: <account domain>
    Logon ID: 0x0000000000
    Linked Logon ID: 0x0
    Network Account Name: -
    Network Account Domain: -
    Logon GUID: <login guid>

Process Information:
    Process ID: 0x000
    Process Name: D:\Program Files\Microsoft System Center\Operations Manager\Server\Microsoft.Mom.Sdk.ServiceHost.exe

Network Information:
    Workstation Name: <workstation name>
    Source Network Address: -
    Source Port: -

Detailed Authentication Information:
    Logon Process: <login process>
    Authentication Package: <authentication package>
    Transited Services: -
    Package Name (NTLM only): -
    Key Length: 0

This event is generated when a logon session is created. It is generated on the computer that was accessed.

The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.

The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).

The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.

The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.

The impersonation level field indicates the extent to which a process in the logon session can impersonate.

The authentication information fields provide detailed information about this specific logon request.
    - Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.
    - Transited services indicate which intermediate services have participated in this logon request.
    - Package name indicates which sub-protocol was used among the NTLM protocols.
    - Key length indicates the length of the generated session key. This will be 0 if no session key was requested.
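One commonly used way to cut this kind of volume, sketched here under assumptions, is to strip the static explanatory block at index time with a SEDCMD in props.conf. The stanza name below assumes classic WinEventLog Security data (match it to your actual sourcetype or source), and the regex with its inline (?s) flag is illustrative only, so test against sample events first:

    # props.conf on the first full instance (heavy forwarder or indexer) that parses the data
    [WinEventLog:Security]
    # drop everything from the boilerplate explanation onward; (?s) lets . match newlines
    SEDCMD-strip_4624_boilerplate = s/(?s)This event is generated when a logon session is created\..*//

A related option, if you use the Splunk Add-on for Microsoft Windows, is renderXml = true on the input, which produces much smaller XML events without the descriptive Message text; the KV store idea would then only be needed if you want to re-attach the verbiage at search time.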
We have a requirement where event logs need to be indexed into a metrics index. For this we are using the mcollect command to convert at search time. For automatic conversion we have scheduled a search to run every hour, but if a schedule is missed, the data for that window is lost. What is the best way to store the last indexed time and run the schedule based on the last indexed time?   Note: in the long run we will be using log-to-metrics sourcetype conversion; until then we need to use the mcollect command.
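A hedged sketch of the checkpoint idea, assuming a simple CSV lookup named mcollect_checkpoint.csv (all index, sourcetype, and metric names below are placeholders): the conversion search reads the last recorded index time and constrains itself with _index_earliest, and a companion step records the newest _indextime it has seen. Give both searches a generous time range (e.g. earliest=-24h or wider) so the _index_earliest filter, not the schedule window, determines what gets picked up.

    Conversion search (scheduled hourly):
    index=app_logs sourcetype=my_events
        [ | inputlookup mcollect_checkpoint.csv
          | stats max(last_indexed) as last_indexed
          | eval search="_index_earliest=" . (coalesce(last_indexed, 0) + 1)
          | return $search ]
    | eval metric_name="app.events.count", _value=1
    | fields _time metric_name _value host sourcetype
    | mcollect index=my_metrics

    Checkpoint update (run right after, over the same window):
    index=app_logs sourcetype=my_events
    | stats max(_indextime) as last_indexed
    | outputlookup mcollect_checkpoint.csv

Because the searches key off index time rather than the schedule window, a missed run is simply caught up on the next run instead of being lost.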
Hi all, I have a lookup file (.csv) with Timestamp, NAME and USEDGB values. I have been trying to run a timechart to see the total USEDGB per day. Both the lookup definition and the lookup table file have app permissions.

| inputlookup ABC_DISK_UTILIZATION.csv
| eval _time=Timestamp
| timechart span=24h sum(USEDGB)

The result only shows time but no values of USEDGB. Can you please help?

Timestamp          NAME    USEDGB
12/08/2023 22:04   RECO_A  48.61
12/08/2023 13:04   RECO_B  46.21
12/08/2023 03:04   RECO_C  46133.89
11/08/2023 20:01   RECO_A  164.11
11/08/2023 18:01   RECO_B  48.61
11/08/2023 16:01   RECO_C  46.21
10/08/2023 22:00   RECO_A  45327.22
10/08/2023 17:00   RECO_B  193.4
10/08/2023 08:00   RECO_C  48.61
09/08/2023 21:00   RECO_A  46.21
09/08/2023 13:00   RECO_B  45205.72
09/08/2023 06:00   RECO_C  132.57
08/08/2023 19:00   RECO_A  48.61
08/08/2023 12:00   RECO_B  46.21
08/08/2023 10:00   RECO_C  45203.77
07/08/2023 22:00   RECO_A  132.56
07/08/2023 14:00   RECO_B  48.61
07/08/2023 07:00   RECO_C  46.21
06/08/2023 22:04   RECO_A  45199.08
06/08/2023 13:04   RECO_B  123.85
06/08/2023 03:04   RECO_C  48.61
05/08/2023 20:01   RECO_A  46.21
05/08/2023 18:01   RECO_B  45196.12
05/08/2023 16:01   RECO_C  117.4
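The likely issue in the search above is that Timestamp is a string, so eval _time=Timestamp does not give timechart a usable epoch value. A hedged sketch of the fix, assuming the format is day-first as in the sample (12/08/2023 22:04 = 12 August):

    | inputlookup ABC_DISK_UTILIZATION.csv
    | eval _time=strptime(Timestamp, "%d/%m/%Y %H:%M")
    | timechart span=1d sum(USEDGB) as total_used_gb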
Hi all, what file would I change if I wanted to customize the global banner? I'd like a dark color to match the Splunk header, as well as an unbolded font. I am thinking this is a CSS/HTML solution, possibly somewhere in the mrsparkle directory? I went into global-banner.conf and changed global_banner.backgrond_color = black, but as expected that didn't work. Thank you!
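For reference, a hedged sketch of what global-banner.conf accepts (setting names from the spec file as I recall them; the key points being that the spelling is background_color and that, at least in recent versions, the value is limited to a small fixed palette rather than arbitrary colors such as black):

    # $SPLUNK_HOME/etc/system/local/global-banner.conf
    [BANNER_MESSAGE_SINGLETON]
    global_banner.visible = true
    global_banner.message = Scheduled maintenance this weekend
    # allowed values are a fixed set (green, blue, red, yellow, orange); black is not accepted
    global_banner.background_color = blue

So a truly dark banner with unbolded text would likely mean overriding CSS under share/splunk/search_mrsparkle/exposed, which is not upgrade-safe; treat that as a last resort.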
Hi @trashyroadz, I have opened a new thread for the issue I am facing. Current Splunk version: 8.2.3.3. While running a query on the search page, I get the error: "Unable to distribute to peer named <idx name> at uri <splunk idx uri> because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_SIZE_RETRIEVAL_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available." I did not suspect a connectivity issue because the message box said the bundle size exceeds the limit. On checking, I could see all apps in $SPLUNK_HOME/var/run/splunk/deploy on the deployer even though we had changed a single file in a single app. As per my understanding, only modified apps should be pushed to the SHs, and from the SH captain to the search peers. Please help with this. Let me know if any other detail is needed.
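Two separate mechanisms may be getting mixed up here: the deployer always stages full app copies under $SPLUNK_HOME/var/run/splunk/deploy when it builds its bundle, which is expected, whereas the error above concerns the knowledge bundle the search head sends to the indexers. A hedged sketch of the distsearch.conf knobs usually involved (values and the blacklist regex are illustrative only; check distsearch.conf.spec for your version):

    # distsearch.conf on the search heads (pushed via the SHC deployer)
    [replicationSettings]
    # maximum knowledge-bundle size, in MB
    maxBundleSize = 2048

    [replicationBlacklist]
    # keep large lookup files out of the bundle sent to the search peers
    big_lookups = (.*[/\\]lookups[/\\].*\.csv)

In practice the first step is usually finding which app or lookup makes the bundle so large (the bundle contents under $SPLUNK_HOME/var/run/searchpeers on an indexer, or the .bundle files under $SPLUNK_HOME/var/run on the search head, show what is being shipped) and excluding it, rather than only raising the limit.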
Hi Team, I help manage the Terraform Cloud for Splunk app, and after v2.37.0 of the AppInspect CLI was released we're seeing this test failure:

FAILURE: check_for_updates property found in [package] stanza is set to True for private app not uploaded to Splunkbase. It should be set to False for private apps not uploaded to Splunkbase.
File: default/app.conf
Line Number: 15

I find this message confusing, as the app is published on Splunkbase. I don't want to disable checks for updates unnecessarily. Could I please get some help understanding how to address this error? Or help me figure out if I'm misunderstanding something? Thanks!
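For what it's worth, the change the check asks for is tiny, shown here as a hedged sketch; whether you actually want it depends on which tag set the app is being vetted under, since the private-app checks appear to fire for anything run through private-app/cloud vetting regardless of the Splunkbase listing:

    # default/app.conf
    [package]
    check_for_updates = false

The setting only controls whether the installed app polls Splunkbase for update notifications, so if your users install from Splunkbase you may prefer to keep it true, validate with the non-private tag set, and raise the discrepancy with the AppInspect team.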
In the Monitoring Console of Splunk Cloud, if you click the Forwarders tab, then the "Forwarders Deployment" option, and scroll down a bit, you will see "Status and Configuration - As of <timestamp>". The timestamp I see is outdated by 6 months from today's date: 19/03/2023 13:09:03+1:00. I am confused, because if I type an instance into the box I get live data showing when the instance last connected to the indexer, with timestamps under 15 minutes ago for active forwarders. What does it mean if this timestamp is wrong? What sort of fixes can I take to try to solve this?
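That "As of" timestamp is usually driven by the forwarder asset table, which is built by a scheduled saved search rather than by live data; if that search has stopped running, the header goes stale even though per-instance drilldowns still show live results. A hedged way to look at what is currently in the table (lookup and field names as I remember them; adjust if they differ on your stack):

    | inputlookup dmc_forwarder_assets
    | eval last_seen=strftime(last_connected, "%F %T")
    | sort - last_connected
    | table hostname forwarder_type version last_seen

On-prem you would go to Monitoring Console > Settings > Forwarder Monitoring Setup and click "Rebuild forwarder assets" (which runs the "DMC Forwarder - Build Asset Table" saved search); on Splunk Cloud, if that setup page is not exposed to you, it is worth raising a support case to have the saved search re-enabled.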
Hi All, I'm looking for a way to share a non-expiring search with other users. If we use the "share job" option or just use the URL from the address bar, it expires once the job expires, but I want to share a link that will not expire. Of course, in that case the search needs to run again on the user's end; the time range and search query are the things I want to share with the link. Is there a way to do this?
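One approach, sketched with placeholder values: instead of sharing the job, share a URL that encodes the query and the time range, which re-runs the search whenever it is opened and never expires (hostname, query, and time bounds below are examples):

    https://splunk.example.com/en-US/app/search/search?q=search%20index%3Dweb%20status%3D500&earliest=-7d%40d&latest=now

The q parameter is the URL-encoded search string (starting with "search" for a plain search) and earliest/latest carry the time range; fixed epoch times also work if you want everyone to see exactly the same window.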
Hello, I'm trying to add new/existing key indicator searches to my dashboard in ES, but the edit toolbar does not have the "Add Key Indicator" button, even though it is present on the default dashboards that ship with key indicators. I also tried to clone the default "Email Activity" dashboard (which has existing key indicators in it), but the cloned dashboard cannot be loaded. What should I do? If this is a bug, which log files do I need to check? Thank you.
Hi, how can we find out the HEC URL for my Splunk Cloud instance?
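For most Splunk Cloud Platform stacks the HEC endpoint follows a predictable pattern, sketched here with placeholders (the exact host prefix and port can differ by stack type, e.g. trial stacks, so confirm against the Splunk Cloud HEC docs or your stack's settings page):

    curl "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
      -H "Authorization: Splunk <your-HEC-token>" \
      -d '{"event": "hello from HEC", "sourcetype": "manual_test"}'

The token itself comes from Settings > Data inputs > HTTP Event Collector, and a 200 response containing {"text":"Success","code":0} confirms the URL and token are right.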
Hi, I'm trying to configure a "Custom Data Type" > SQS input in the Splunk Add-on for AWS to onboard data from an AWS account. Is it possible to create the SQS input using an IAM role instead of an account (for which I would need the Key ID and Secret Key of the account)?
How can I convert GMT to JKT (Jakarta) time in Splunk events using a query?
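Jakarta time (WIB) is UTC+7 with no daylight saving, so if the GMT value is a string field, a hedged sketch would be to parse it and add seven hours (the field name and timestamp format below are assumptions):

    ... | eval gmt_epoch=strptime(gmt_time, "%Y-%m-%d %H:%M:%S")
        | eval jkt_time=strftime(gmt_epoch + 7*3600, "%Y-%m-%d %H:%M:%S")

If instead you just want _time rendered in Jakarta time, the cleaner route is to set the user account's time zone to Asia/Jakarta so strftime(_time, ...) and the UI display convert automatically.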
Hi, I'm trying to integrate the Tanium module with Splunk over HTTP, but I don't see what exactly we need to add in the URL, nor what we need to add in the headers on the Splunk side. Can anyone help me with the masked data?
Hi, Splunk Assist is producing a lot of execution errors in my search head cluster and on my intermediate forwarder. Since the former is part of a cluster, I thought of deploying the app.conf via the SHC deployer, but that would not be enough, since the docs say one must execute ./splunk disable app splunk_assist. The same applies to heavy forwarders connected to a deployment server. Sadly, the Splunk documentation on Splunk Assist doesn't seem to acknowledge that clustered environments exist. Kind Regards, th
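For what it's worth, ./splunk disable app splunk_assist just writes an [install] stanza into the app's local app.conf, so in theory the same state can be shipped as configuration; a hedged sketch of what that file contains is below. Whether it is safe to push an app named splunk_assist from the SHC deployer or a deployment server (given that deployer pushes merge local into default on the members and the app also ships with Splunk itself) is exactly the part the docs leave unclear, so test on one member first.

    # etc/apps/splunk_assist/local/app.conf - what the CLI command writes
    [install]
    state = disabled

On deployment-server-managed heavy forwarders the same caveat applies: a serverclass delivering an app with this stanza would disable it, but it competes with the copy bundled with Splunk, so a scripted rolling ./splunk disable app may end up being the more predictable route.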
Why does the SSL status show as "false" despite configuring SSL? In our environment we have enabled TLS between forwarders and receivers. The connection is established and we can see data coming into Splunk through the secure TLS channel. I have also validated manually using the openssl client, and verification was successful with status ok. Yet for some hosts SSL shows as false, and it keeps flipping at random times between True and False; the connection type is cookedSSL for the hosts showing False. I have checked all the TcpOutputProc and TcpInputProc entries in the splunkd logs and cannot find any errors related to SSL, but I did find the WARN messages below on one of the forwarders. Is this causing the problem? Any leads on this?
Hi, in ITSI > Notable Event Aggregation Policies > Action Rules, "Run a script" can no longer be executed. The change that triggered this was a Splunk Core version upgrade (8.2.7 > 9.0.5.1).

Environment before the upgrade:
- Splunk Core 8.2.7
- ITSI 4.11.6
- Run a Script configured, [File name] "patlite.sh RED" > running and enabled

Environment after the upgrade:
- Splunk Core 9.0.5.1
- ITSI 4.11.6
- Run a Script configured, [File name] "patlite.sh RED" > not working

Script deployment location: /opt/splunk/etc/apps/SA-ITOA/bin/scripts/patlite.sh

The ITSI version has not changed, only the Splunk Core version, but is there some configuration change that needs to be made?