All Topics

Right now I have an issue with duplicate notables. I want a notable to re-generate only if new events have added to its risk score, not if no new events have occurred and its risk score has remained the same. I have tried adjusting our base correlation search's throttling to throttle by risk object over every 7 days, because our correlation search looks back over the last 7 days' worth of alerts to determine whether or not to trigger a notable. Which brings me to this question: do the underlying alerts (i.e., the alerts that contribute to generating a risk score, which ultimately determines whether a notable is generated for a risk object) also need to be throttled for the past 7 days? Right now the throttling settings for those alerts are set to throttle by username over the past 1 day.
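
A minimal sketch of per-result throttling in savedsearches.conf, assuming the correlation search emits a risk_object field (the stanza name here is illustrative):

    # savedsearches.conf (stanza name is a placeholder)
    [ABC - Risk Threshold Exceeded - Rule]
    alert.suppress = 1
    alert.suppress.fields = risk_object
    alert.suppress.period = 7d

Note that throttling only suppresses the alert actions (such as creating the notable); it does not stop the underlying searches from running or risk events from accruing, so whether the contributing alerts need matching 7-day throttles depends on whether you also want to limit how often they write risk.
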
Hello, I have these two events that are part of a transaction. They have the same s and qid. I need to match the s and qid of these two events and insert a field equal to hdr_mid from the second event into the first event. Is this possible? In the final stats I group events by hdr_mid and qid, so I need the hdr_mid value present in the first event if I want to extract all recipient email addresses. To do so I need to pull rcpts from the first event and not the second. How would I do that?

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426217+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 cmd=send profile=mail qid=49O9Yi2a005119 rcpts=1@company.com,2@company.com,3@company.com...52@company.com

Oct 24 13:46:56 hostname.company.com 2024-10-24T18:46:56.426568+00:00 hostname filter_instance1[31332]: rprt s=42cu1tr3wx m=1 x=42cu1tr3wx-1 mod=mail cmd=msg module= rule= action=continue attachments=0 rcpts=52 routes=allow_relay,default_inbound,internalnet size=4416 guid=Rze4pxSO_BZ4kUYS0OtXqLZjW3uHSx8d hdr_mid=<103502694.595.1729795616099.JavaMail.psoft@xyz123> qid=49O9Yi2a005119 hops-ip=x.x.x.x subject="Message subject" duration=0.271 elapsed=0.325
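
A minimal sketch using eventstats to copy hdr_mid onto every event sharing the same s and qid, assuming the fields s, qid, cmd, rcpts, and hdr_mid are already extracted (the index and sourcetype are placeholders):

    index=mail sourcetype=filter_log
    | eventstats values(hdr_mid) as hdr_mid by s, qid
    | where cmd="send"
    | stats values(rcpts) as rcpts by hdr_mid, qid

Filtering to cmd="send" after the eventstats keeps only the first event (the one carrying the recipient list) while it now also carries hdr_mid from the second.
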
Hi guys, I have a set of data in the following format (a manually exported list), and my requirements are as follows:

- Objective: I need to identify hosts that haven't connected to the server for a long time and track the daily changes in these numbers.
- Method: Since I need daily statistics, I must perform the import daily. However, without any configuration changes, Splunk defaults to using "Last Communication" as "_time", which is not what I want. I need "_time" to reflect the date of the import. This way, I can track changes in the count of "Last " records within each day's imported data. I can't use folder or file monitoring for this because it only adds new data, so my only options are to use oneshot or to perform the import via the Web interface.

Is my approach correct? If not, what other methods could be used to handle this? I could use splunk oneshot to upload the file to the Splunk indexer, but I couldn't adjust the date to the import day or a specific day. For example, I used the command:

    splunk add oneshot D:\upload.csv -index indexdemo

I want the job to run automatically, so I don't want to change any content in the file. How can I do this?
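
One way to make _time equal the import time is a props.conf setting on the file's sourcetype, so Splunk stamps events with the current system time at index time (a sketch; the sourcetype name is illustrative):

    # props.conf on the instance that parses the file
    [my_csv_import]
    DATETIME_CONFIG = CURRENT

The oneshot command can then reference that sourcetype explicitly:

    splunk add oneshot D:\upload.csv -index indexdemo -sourcetype my_csv_import
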
Hello, I have configured an index on an indexer, and when I try to fetch data from that index on the search head I get no data. When I search the same index on the indexer itself, I can get the data, but not from the search head. Could you please advise what configuration needs to be checked on my search head and indexer? Note: it's not a clustered setup. Thanks
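
In a non-clustered setup the search head must list the indexer as a distributed search peer. A sketch of the CLI, run on the search head (hostname and credentials are placeholders):

    splunk add search-server https://indexer.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

The same can be done in the UI under Settings > Distributed search > Search peers.
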
Hi All, We are using the earliest and latest time modifiers in a search in our Splunk test environment and they work fine, but in the production environment earliest and latest are not working in the SPL query for some reason. Can you please suggest alternatives for these modifiers, and help us figure out why earliest and latest are not working in the production environment? Thanks, Srinivasulu S
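
For reference, the inline modifiers look like the first line below; if production rejects them for some reason, a where clause with relative_time is one workaround (the index name is a placeholder):

    index=myindex earliest=-24h latest=now

    index=myindex
    | where _time >= relative_time(now(), "-24h")

Note that the where-clause version still scans the full time range selected in the time picker, so it is slower than true earliest/latest filtering.
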
How do I extract fields from the source below?

/audit/logs/QTEST/qtestw-core_server4-core_server4.log

I need to extract QTEST as environment, qtestw as hostname, core_server4 as component, and core_server4.log as filename.
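
A minimal rex sketch against the source field, assuming the path always follows this environment/hostname-component-filename layout:

    | rex field=source "/audit/logs/(?<environment>[^/]+)/(?<hostname>[^-]+)-(?<component>[^-]+)-(?<filename>[^/]+)$"

Against the sample path this yields environment=QTEST, hostname=qtestw, component=core_server4, filename=core_server4.log.
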
Hello, we would like to filter the ES Incident Review page and hide notables containing the TEST keyword, for example. How can we do this? Thanks for your help.
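
One hedged approach, assuming the notables carry the test keyword in their title: the search box on Incident Review accepts filtering terms, so something like the line below may work (the rule_title field name is an assumption; check the actual field on your notables):

    NOT rule_title="*TEST*"
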
Hi. I do not understand the SHC config well:

[raft_statemachine]
disabled = <boolean>
* Set to true to disable the raft statemachine.
* This feature requires search head clustering to be enabled.
* Any consensus replication among search heads uses this feature.
* Default: true

replicate_search_peers = <boolean>
* Add/remove search-server request is applied on all members of a search head cluster, when this value is set to true.
* Requires a healthy search head cluster with a captain.

What changes in an SHC when "disabled" is set to true or false? By default it is true. Does "replicate_search_peers = true" work only if disabled is false? What does setting this to true or false do to the cluster?
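
For reference, the non-default configuration would look like this in server.conf on each SHC member (a sketch only; per the spec text above, replicate_search_peers depends on the statemachine being enabled, i.e. disabled = false):

    [raft_statemachine]
    disabled = false
    replicate_search_peers = true
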
I’m experiencing slow performance with my Splunk queries, especially when working with large datasets. What are some best practices or techniques I can use to optimize my searches and improve response times? Are there specific commands or settings I should focus on?
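
As one illustration of a common big win, tstats works against index-time metadata (or accelerated data models) instead of raw events, so it is typically far faster for counting and grouping (the index and field names here are placeholders):

    | tstats count where index=web by host, sourcetype

Beyond that, the usual levers are narrowing the time range, always specifying index and sourcetype, filtering as early in the pipeline as possible, and avoiding leading wildcards in search terms.
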
I need to replace the variables in the rule_title field that is generated when using the `notable` macro. I was able to get this search to work, but only when I table the specific variable fields. Is there a way I can do that for all titles, regardless of the title and its variable fields?
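
A generic sketch that substitutes every $field$ token from whatever fields are present on each result, using foreach's <<FIELD>> template (a sketch only; field names containing regex metacharacters may need extra escaping):

    | foreach * [ eval rule_title=replace(rule_title, "\$<<FIELD>>\$", '<<FIELD>>') ]

Because foreach iterates over all fields, this avoids hard-coding the variable fields per title.
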
Hello, how can I display only one value of these three "maxCapacitMachine" results (which are the same in all three cases) in a timechart with a BY clause?
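
One sketch: since the three values are identical, compute that field outside the BY split, e.g. by taking the max across the series (the field name and span are assumptions based on the question):

    | timechart span=1h max(maxCapacitMachine) as maxCapacitMachine
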
Hello, I'm having a hard time trying to find which data input the events from a search are originating from. The search is:

    source="/var/www/html/PIM/var/log/webservices/*"

I've looked through "Files & Directories" (where I thought I would find it) and the rest of the Data Inputs, but can't seem to locate it anywhere. A side question: I tried creating a new Files & Directories data input by putting the full Linux path like below:

    //HostName/var/www/html/PIM/var/log/webservices/*

but it says "Path can't be empty". I'm sure this is probably not how you format a Linux path; I just couldn't find what I'm doing wrong. Thanks for any help at all. Newb
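
A hedged way to trace it: the input usually lives in inputs.conf on whichever forwarder sends the data, not on the indexer, so first find the sending host:

    index=* source="/var/www/html/PIM/var/log/webservices/*"
    | stats count by host, index, sourcetype

Then, on the host identified above, running `splunk btool inputs list --debug` shows every configured input and the file it comes from. Monitor paths are also local to that host, which is likely why a //HostName/... style path is rejected in the UI.
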
Hi, my enterprise is using Mothership 2.0. Recently, Mothership seemed to continue its collection of data, but a few sources are not uploading to their respective indexes, and we are having trouble getting it to work.
Some years ago I created a (beautiful!) dashboard with multiple panels, which presented related data from different angles. Some upgrades of the Splunk server later (currently using Splunk Enterprise 9.1.5), all of the panels -- except for the one that shows the raw results of the base search -- stopped working. The common base search is defined as:

    <form version="1.1" theme="dark">
      <label>Curve Calibration Problems</label>
      <search id="common">
        <query>index=$mnemonic$ AND sourcetype="FOO" ...
    | eval Curve=replace(Description, ".* curve ([^\(]+) \(.*", "\1")
        </query>
        <earliest>$range.earliest$</earliest>
        <latest>$range.latest$</latest>
      </search>

And then the panels add to it like this, for one example:

    <panel>
      <title>Graph of count of errors for $mnemonic$</title>
      <chart>
        <search base="common">
          <query>top limit=50 Curve</query>
        </search>
    ...

Note how the base search's ID is "common", which is exactly the value referred to as base. Again, the base search itself works correctly. But when I attempt to edit the panel now, the search expression is shown as only the query that used to be appended to the base. If I click on the "Run Search" link in that window, I see that, indeed, only that expression is searched for, predictably yielding no results. It seems like something has changed in Splunk; how do I restore this dashboard to working order?
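
A hedged fix, based on the documented constraints on post-process searches: the base search should explicitly pass along the fields the panels need (non-transforming base searches only hand a limited field set to post-processes), and the post-process query conventionally starts with a pipe. A sketch of both changes:

    <search id="common">
      <query>index=$mnemonic$ AND sourcetype="FOO" ...
    | eval Curve=replace(Description, ".* curve ([^\(]+) \(.*", "\1")
    | fields Curve</query>
      <earliest>$range.earliest$</earliest>
      <latest>$range.latest$</latest>
    </search>

    <search base="common">
      <query>| top limit=50 Curve</query>
    </search>

Whether the missing leading pipe or the missing fields clause is the actual regression here is an assumption; both are cheap to try.
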
Hello Smarties... Can someone offer some assistance? We recently started ingesting Salesforce into Splunk, and usernames are coming in as IDs (00000149345543qba) instead of Jane Doe. I was told to use join to get the usernames or names and add them to the sourcetype I need them "joined" with. I am trying to get the "Login As" events, which are under sourcetype="sfdc:setupaudittrail". How do I get the Login As events with usernames, if the usernames are under the user sourcetype and the Login As events are under the setupaudittrail sourcetype? Here is my attempted search, which doesn't come up with anything, although I know the events exist:

    index=salesforce sourcetype="sfdc:user"
    | join type=outer UserAccountId
        [search index=salesforce sourcetype="sfdc:setupaudittrail" Action=suOrgAdminLogin]
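
A sketch that flips the search around, starting from the audit-trail events and enriching them from the user records; the Id/CreatedById/Name/Username field names are assumptions from the standard Salesforce objects, so check your actual extracted fields:

    index=salesforce sourcetype="sfdc:setupaudittrail" Action=suOrgAdminLogin
    | join type=left CreatedById
        [ search index=salesforce sourcetype="sfdc:user"
          | rename Id as CreatedById
          | fields CreatedById, Name, Username ]

The key point is that join only matches when the field named in the join clause exists with the same name and values on both sides, which is likely why the original search returns nothing.
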
Hello Everyone, I'm having a hard time finding the appropriate way to display data. I have duplicate data where one field is unique. I would like to dedup but leave only one instance of the unique value.

Example of what I want to dedup:

    field1  field2  field3  field4
    a       b       c       d
    a       b       c       e
    a       b       c       f

Example of what I would like to see:

    field1  field2  field3  field4
    a       b       c       d

Any help would be greatly appreciated. Regards.
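
If "first instance wins" is acceptable, dedup on the duplicated fields keeps one row per combination:

    | dedup field1 field2 field3

An equivalent with stats, which makes the choice of surviving field4 value explicit, is `| stats first(field4) as field4 by field1 field2 field3`.
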
Hello, good afternoon. Does anyone know how to integrate the Adabas database with Splunk, and where I can download the JDBC drivers for Splunk DB Connect?
Hello, I am using Splunk Enterprise. On the MS site I registered an app with permissions for Security Alerts and Security Incidents, and I also added the account in Splunk. It is impossible to add an input; there is the error below:
Dear all, requesting your support in achieving the below. I have a method which has a custom object as a parameter. To run a getter chain, I first need to cast the parameter into an Object array, then cast it into my POJO class, and then run the getter on it. How can this be achieved? My code snippet is below.

Class Name: com.mj.common.mjServiceExecute
Method: execute(com.mj.mjapi.mjmessage)

    public abstract class mjServiceExecute implements mjServiceExecuteintf {
        public mjmessage execute(mjmessage paramMjMessage) {
            mjmessage mjmjmesg = null;
            try {
                // Cast the payload to an Object array, then its first element to the POJO
                Object[] arrayOfObject = (Object[]) paramMjMessage.getPayload();
                MjHeaderVO mjhdrvo = (MjHeaderVO) arrayOfObject[0];
                String str1 = mjhdrvo.getName();
            } catch (Exception e) {
                // handle/log the exception
            }
            return mjmjmesg;
        }
    }

I want to extract the value of str1 to split the business transaction. Requesting your assistance.
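
If this is an AppDynamics POJO split on the method parameter, a getter chain applied to the mjmessage parameter might look like the sketch below. This is an assumption on my part: the .[0] array-element notation follows the getter-chain conventions I know of, and no explicit casts should be needed because getter chains resolve methods by reflection at runtime:

    getPayload().[0].getName()
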
I'm working on an environment with a mature clustered Splunk instance. The client wishes to start dual-forwarding to a new replacement environment, which is a separate legal entity (they understand the imperfections of dual-forwarding, possible data loss, etc.). They need to rename the destination indexes in the new environment, dropping a prefix we can call 'ABC'. I believe the easiest way to approach this is via INGEST_EVAL on the new indexers. There are approximately 20 indexes to rename, for example:

    ABC_linux
    ABC_cisco

transforms.conf (located on the NEW indexers):

    [index_remap_A]
    INGEST_EVAL = index="value"

I have read the transforms.conf spec file for 9.3.1 and a 2020 .conf presentation, but I am unable to find good examples. Has anyone taken this approach? As it is only a low volume of remaps, it may be best to approach this statically.
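
A minimal sketch of a dynamic remap, assuming events arrive addressed to the old ABC_-prefixed index names. The stanza and transform-class names are illustrative, the := operator overwrites the existing index value, and attaching via [default] is for brevity (in practice you would likely scope it to specific sourcetypes or source:: stanzas):

    # transforms.conf (on the new indexers)
    [strip_abc_prefix]
    INGEST_EVAL = index:=replace(index, "^ABC_", "")

    # props.conf
    [default]
    TRANSFORMS-strip_abc = strip_abc_prefix

Given only ~20 indexes, a static alternative such as a single case() expression mapping each old name to its new one would work just as well and is easier to audit.
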