
All Topics

Hi there, I have a file monitoring stanza on a universal forwarder where I filter with transforms.conf so that only the log entries I need are kept, because the server writes log entries from multiple business processes into the same log file. Now I need the entries of another process (with a different ACL) from that same log file, but in a different index and in our QS cluster, while the first data input still ingests into our PROD cluster.

So I have my inputs.conf:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 1>
sourcetype = <dataspecific sourcetype 1>

a props.conf:

[<dataspecific sourcetype 1>]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRUNCATE = 1500
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 20
TIME_FORMAT = [%y/%m/%d %H:%M:%S]
TRANSFORMS-set = setnull, setparsing

and a transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = (<specific regex>)
DEST_KEY = queue
FORMAT = indexQueue

As a standalone stanza, the new input would look like this, with its own setparsing transform:

[monitor://<path_to_logfile>]
disabled = 0
index = <dataspecific index 2>
sourcetype = <dataspecific sourcetype 2>
_TCP_ROUTING = qs_cluster

To be honest, I could just create a second stanza that is slightly different and still reads the same file, but I don't want two tail readers on the same file. What possibilities do I have? Thanks in advance.

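One possible approach (a sketch only, untested against this setup): keep the single monitor stanza and have transforms.conf clone the events of the second process into a second sourcetype with CLONE_SOURCETYPE, then route the clones to the other index and output group. The transform names below are made up, and qs_cluster must match a tcpout group defined in outputs.conf:

# transforms.conf
[clone_for_qs]
REGEX = (<regex for the second process>)
CLONE_SOURCETYPE = <dataspecific sourcetype 2>

[set_qs_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = <dataspecific index 2>

[set_qs_routing]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = qs_cluster

# props.conf
[<dataspecific sourcetype 1>]
TRANSFORMS-set = clone_for_qs, setnull, setparsing

[<dataspecific sourcetype 2>]
TRANSFORMS-route = set_qs_index, set_qs_routing

Keep in mind that CLONE_SOURCETYPE and queue/routing transforms run in the parsing/typing pipelines, so they normally take effect on a heavy forwarder or the indexers rather than on the universal forwarder itself, and the transform ordering may need adjusting so the clone is created before the original events are null-queued.
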
I have never been one to understand regex, but I need to extract everything after the first entry (#172...) into its own field. Let's call it manual_entry. I'm getting tired of searching and randomly trying things.

#1724872356 exit
#1724872357 exit
#1724872463 cat .bashrc
#1724872485 sudo cat /etc/profile.d/join-timestamp-history.sh
#1724872512 exit
#1724877740 firefox

manual_entry
exit
exit
cat .bashrc
sudo cat /etc/profile.d/join-timestamp-history.sh
exit
firefox

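A minimal sketch, assuming each event is a single line that starts with the epoch marker and the rest of the line is the command:

... | rex field=_raw "^#\d+\s+(?<manual_entry>.+)$"

If the timestamp and the command actually arrive as separate lines within one event, a multiline-aware variant such as "(?m)^#\d+\s*[\r\n]+(?<manual_entry>.+)$" may be needed instead.
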
Hello members, I'm struggling with something. I have configured the data inputs and the indexer name on the HF, and pointed the app to the search head for search & reporting. I have also forwarded logs from the other system as syslog data to the heavy forwarder. I configured the same index on the HF and on the cluster master and pushed it to all indexers, but when I look for that index on the SH (search head) there are no results. Can someone help me please? Thanks.

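A couple of checks that might narrow this down (the index name below is a placeholder): first confirm whether the events ever reached the indexers, then confirm that the HF's connections are arriving.

| tstats count where index=<your_index> by splunk_server

index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) as kb_received by hostname

If the tstats search returns nothing over All Time, the data never made it to the indexers, so check outputs.conf on the HF and the input itself; if it does return counts but the normal search shows nothing, check the role's allowed indexes on the search head and the time range being searched.
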
Hello, Splunk DB Connect is indexing only 10k events per hour at a time, no matter what setting I configure in the inputs. The DB Connect version is 3.1.0. My db_inputs.conf is:

[ABC]
connection = ABC_PROD
disabled = 0
host = 1.1.1.1
index = test
index_time_mode = dbColumn
interval = 900
mode = rising
query = SELECT *\
FROM "mytable"\
WHERE "ID" > ?\
ORDER BY "ID" ASC
source = XYZ
sourcetype = XYZ:lis
input_timestamp_column_number = 28
query_timeout = 60
tail_rising_column_number = 1
max_rows = 10000000
fetch_size = 100000

When I run the query using dbxquery in Splunk, I do get more than 10k events. I also tried max_rows = 0, which should basically ingest everything, but it's not working. How can I ingest an unlimited number of rows?

I'm working on a dashboard in which the user enters a list of hosts. The issue I'm running into is that they must add an asterisk to the host name or it isn't found in the search. This is what the SPL looks like:

index=os_* (`wineventlog_security` OR sourcetype=linux_secure) host IN ( host1*, host2*, host3*, host4*, host5*, host6*, host7*, host8* ) earliest=-7d@d
| dedup host
| eval sourcetype=if(sourcetype = "linux_secure", sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw

If there is no * then there are no results. What I would like to be able to do is have them enter the hostname or FQDN, in either upper or lower case, and have the SPL change it to lower case, remove any FQDN parts, add the *, and then search. So far I haven't come up with SPL that works. Any thoughts? TIA, Joe

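A sketch of one way to do this, assuming the user input arrives in a dashboard token (called host_tok here, comma-separated): normalize the list in a subsearch and let format build the host clause for the outer search.

index=os_* (`wineventlog_security` OR sourcetype=linux_secure) earliest=-7d@d
    [| makeresults
     | eval host=split(lower("$host_tok$"), ",")
     | mvexpand host
     | eval host=mvindex(split(trim(host), "."), 0) . "*"
     | fields host
     | format ]
| dedup host
| eval sourcetype=if(sourcetype = "linux_secure", sourcetype, source)
| fillnull value=""
| table host, index, sourcetype, _raw

lower/split/trim handle case and the comma-separated list, mvindex(..., 0) strips the FQDN suffix, the eval appends the wildcard, and format turns the subsearch rows into ( host="host1*" ) OR ( host="host2*" ) ... for the outer search.
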
Hi, please share the configuration documentation for the Panorama side of integrating this app with Splunk SOAR.

Hello, for this question I am referencing the documentation page: https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Install/UpgradePathForUnprivilegedInstalls

There are two conflicting statements, and I do not know how to proceed with my on-premises, unprivileged, primary + warm standby configuration (the database is on the instances, not external).

At the top of the documentation, it states: "Unprivileged Splunk SOAR (On-premises) running a release earlier than release 6.2.1 can be upgraded to Splunk SOAR (On-premises) release 6.2.1, and then to release 6.2.2." It says CAN BE. So... is it optional?

"All deployments must upgrade to Splunk SOAR (On-premises) 6.2.1 before upgrading to higher releases in order to upgrade the PostgreSQL database." It says MUST UPGRADE. So... is it mandatory?

But then, toward the bottom of the table, I'm looking at the row that says I am starting with version 6.2.0. Steps 1 and 2 are conditionals for clustered deployments and external PostgreSQL databases. Step 3 goes directly to upgrading to 6.2.2.

So... do I, or do I not, upgrade to 6.2.1 first?

Howdy, I'm fairly new to Splunk and couldn't google the answer I wanted, so here we go. I am trying to simplify my queries and filter down the search results better. Current example query:

index=myindex
| search (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe")
    process!="C:\\Windows\\System32\\svchost.exe"
    process!="C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe"
    process!="C:\\Program Files\\Common Files\\McAfee\\*"
    process!="C:\\Program Files\\McAfee*"
    process!="C:\\Windows\\System32\\enstart64.exe"
    process!="C:\\Windows\\System32\\wbem\\WmiPrvSE.exe"
    process!="C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe"
| table _time, source, subject, object_file_path, SubjectUserName, process, result

This is just an example; I do the same thing for multiple different fields and indexes. I know it's not the most efficient way of doing it, but I don't know any better ways. Usually I'll start broad and whittle down the things I know I'm not looking for. Is there a way to simplify this (I could possibly do regex, but I'm not really good at that), or something else to make my life easier, such as combining all the values I want to filter on for one field? Any and all help/criticism is appreciated.

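A sketch of one way to shorten this, using the field names from the example: IN() collapses the OR lists and NOT ... IN() collapses the exclusions (wildcards are allowed inside IN in the search command), and putting it all in the base search avoids the extra | search step.

index=myindex (EventCode IN (4663, 4660) OR EventID IN (2, 3, 11) OR Processes IN ("*del*.exe", "*rm*.exe", "*rmdir*.exe"))
    NOT process IN ("C:\\Windows\\System32\\svchost.exe", "C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe", "C:\\Program Files\\Common Files\\McAfee\\*", "C:\\Program Files\\McAfee*", "C:\\Windows\\System32\\enstart64.exe", "C:\\Windows\\System32\\wbem\\WmiPrvSE.exe", "C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe")
| table _time, source, subject, object_file_path, SubjectUserName, process, result

For an exclusion list that keeps growing, moving the excluded processes into a macro (or a lookup consulted at search time) keeps the query itself short and lets you maintain the list in one place.
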
Hi. Running 9.0.6, and a user (who is the owner) can schedule REPORTS but not DASHBOARDS. It's a CLASSIC dashboard (not the new fancy Stooooodio one). Dashboards --> Find Dashboard --> Edit button --> no 'Edit Schedule'. Open the dashboard, top right Export --> no 'Schedule PDF'. My local admin says 'maybe they changed something in 9.0.6', but I'm unconvinced until this legendary community agrees. It "feels" like a missing permission is all.

Below is my raw log:

[08/28/2024 08:14:50] Current Device Info ...
******************************************************************************
Current Mode: Skull Teams
Current Device name: xxxxx
Crestron Package Environment version :1.00.00.004
Crestron Package Firmware version :1.17.00.040
Crestron Package Flex-Hub version :1.3.0127.00204
Crestron Package HD-CONV-USB-200 version :009.051

I want to extract only: Crestron Package Firmware version :xx.xx.xxx

I wrote a query like the one below, but it is not working, please help:

index=123 sourcetype = teams
| search "Crestron Package Firmware version :"
| rex field=_raw ":\s+(?<CCSFirmware>.*?)$"
| eval Time(utc)=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time(utc) CCSFirmware

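A sketch of a corrected version, assuming the whole device-info block arrives as one event: the rex is anchored on the literal label rather than on the first colon in the event (which sits in the timestamp), and the eval target avoids parentheses, since Time(utc) is not a valid field name to assign to.

index=123 sourcetype=teams "Crestron Package Firmware version"
| rex "Crestron Package Firmware version\s*:\s*(?<CCSFirmware>[\d.]+)"
| eval time_utc=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host time_utc CCSFirmware
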
Hi all, hoping someone can help me. We have a number of Windows servers with the Universal Forwarder installed (9.3.0), and they are configured to forward logs to an internal heavy forwarder running Linux. Recently we've seen crashes on the Windows servers, which seem to happen because Splunk-MonitorNoHandle takes more and more RAM until there is none left. I have therefore limited the RAM that Splunk can use to stop the crashing, but I need to understand the root cause.

It seems to me that the HF is blocking the connection for some reason, and when that happens the Windows server has to cache the entries in memory. Once the connection is blocked, it never seems to unblock and the backlog just keeps getting bigger and bigger. Here is an example from the log:

08-21-2024 16:42:13.223 +0100 WARN TcpOutputProc [6844 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=splunkhf02.mydomain.net inside output group default-autolb-group from host_src=WINDOWS02 has been blocked for blocked_seconds=54300. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

I tried setting maxKBps to 0 in limits.conf on the Windows server; I also tried 256 and 512, but we're still having the same problems. If I restart the Splunk service it 'solves' the issue, but of course it also loses all of the log entries from the buffer in RAM. Can anyone help me understand the process here? Is the traffic being blocked by a setting on the HF? If so, where could I find it to modify it? Or is it something on the Windows server itself? Thanks for any assistance!

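A sketch of a search that can show whether the HF's own queues are filling up, which would make it stop accepting forwarder traffic (the host value is a placeholder for the HF):

index=_internal host=splunkhf02* source=*metrics.log* group=queue
| timechart span=5m perc95(current_size_kb) by name

If the downstream queues (indexqueue/typingqueue, or the tcpout queue toward the indexers) sit at their ceiling, the blockage originates beyond the Windows forwarders and the UF-side pause is just back-pressure; changing maxKBps on the UF will not help in that case.
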
When I search, I want something like this: if(ID = 99) then lookup 1, else lookup 2. What I have right now is something like this, but I don't know how to put it in the correct syntax:

| eval To_AccountID= if(ID="99", [search | lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID, AccountType as To_Account], [search | lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID, AccountType as To_Account])

What is the best way to code something like this?

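A sketch of the usual pattern for this (lookup and field names copied from the question, so treat them as placeholders): a lookup cannot be embedded inside eval, so run both lookups into temporary fields and then pick between them with if().

| lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as to_id_1, AccountType as to_acct_1
| lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as to_id_2, AccountType as to_acct_2
| eval To_AccountID=if(ID="99", to_id_1, to_id_2)
| eval To_Account=if(ID="99", to_acct_1, to_acct_2)
| fields - to_id_1, to_acct_1, to_id_2, to_acct_2
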
Hello, I have a CSV file that I monitor via the Universal Forwarder (UF). I'm encountering an issue where on some days I cannot find the fields in Splunk when I run index=myindex, even though they appear on other days. The CSV file does not contain a header, and the format of the file is the same every day (each day starts with an empty file that is populated later). Here is the props.conf configuration that I'm using:

[csv_hff]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
FIELD_NAMES = heure,id,num,id2,id3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = heure
TIME_FORMAT = %d/%m/%Y %H:%M:%S
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

Has anyone else encountered the same problem? Splunk version 9. Thank you.

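One thing worth checking, as a hedged pointer rather than a definitive diagnosis: INDEXED_EXTRACTIONS is applied by the universal forwarder itself, so the [csv_hff] stanza has to be present in props.conf on the UF (not only on the indexers or the search head), and because the extractions become indexed fields, tstats can show on which days they were actually created (index and field names taken from the question):

| tstats count as events, count(heure) as events_with_heure where index=myindex sourcetype=csv_hff by _time span=1d

Days where events is non-zero but events_with_heure is zero point at the index-time extraction (for example the props stanza missing on the UF, or the sourcetype being overridden) rather than at anything on the search side.
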
Running queries on really large sets of data and sending the output to an outputlookup works well for weekly refreshed dashboards. Is there a way to have some numbers from the initial report go into a separate, second outputlookup for monthly tracking? For example, a weekly report or dashboard shows me details on a daily basis plus the weekly summary, which is great. Now the weekly summary should additionally go to a separate file for the monthly view. Is there a way to 'tee' results to different outputlookups?

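A sketch of one way to 'tee' the results, with made-up lookup and field names: outputlookup writes the current result set and then passes it on down the pipeline, so a single scheduled search can write the detail, aggregate further, and append the summary rows to a second lookup.

<your weekly search producing the daily detail>
| outputlookup weekly_detail.csv
| stats sum(count) as weekly_total by report_week
| outputlookup append=true monthly_summary.csv

append=true makes the second file accumulate one summary row per run for the monthly view; without it the file would be overwritten each week.
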
Good day, I have a query that I would like to add more information to. The query pulls all users that accessed an AI site and gives me data for weekdays as a 1 or 0 depending on whether the site was accessed. Query 1 gets the user from index db_it_network, and I would like to add the department of each user by querying index=collect_identities sourcetype=ldap:query. The users are stored in the collect_identities index in the 'email' field and their department in the 'bunit' field.

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_wday
| stats count by user app date_wday
| chart count by user app
| sort app 0

Note: the | stats | chart is necessary to make the results distinct, so that one user returns results for one app per day.

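A sketch of one way to pull in the department, assuming the user field in the network data matches the email field in the identities data once both are lowercased (that mapping is an assumption): build a user-to-bunit table in a subsearch, join it in before charting, and fold the department into the user label so the chart keeps it.

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| stats count by user app date_wday
| eval user=lower(user)
| join type=left user
    [ search index=collect_identities sourcetype=ldap:query
      | eval user=lower(email)
      | stats latest(bunit) as department by user ]
| eval user=user . " (" . coalesce(department, "unknown") . ")"
| chart count by user app
| sort app 0

If the identity data is large or changes slowly, writing that user-to-department mapping to a lookup on a schedule (outputlookup) and applying it with | lookup instead of join avoids subsearch row limits.
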
I can get a numeric table aligned to the left in the statistics view with | eval count=printf("%-10d",<your_field>), but the alignment does not carry over to the dashboard. Any insight into why this doesn't work, or is there another way to align numeric results to the right on a dashboard for aesthetic purposes?

Hello, I need urgent help. I am using the REST API Modular Input, and the problem is that I am not able to set the parameters for event breaking. Below is a sample of the data:

{ "User" : [
{ "record_id" : "2", "email_address" : "dsfsdf@dfdf.net", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-23T05:28:43.091+00:00", "user_id" : "54216542", "username" : "Audit.Test1", "suspended" : false, "person_id" : "", "credentials_email_sent" : "", "user_guid" : "21SD6F546S2SD5F46", "user_creation_date" : "2024-08-23T05:28:42.000+00:00", "user_last_update_date" : "2024-08-23T05:28:44.000+00:00" },
{ "record_id" : "3", "email_address" : "XDCFSD@dfdf.net", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-28T06:42:43.736+00:00", "user_id" : "300000019394603", "username" : "Assessment.Integration", "suspended" : false, "person_id" : "", "credentials_email_sent" : "", "user_guid" : "21SD6F546S2SD5F46545SDS45S", "user_creation_date" : "2024-08-28T06:42:43.000+00:00", "user_last_update_date" : "2024-08-28T06:42:47.000+00:00" },
{ "record_id" : "1", "email_address" : "dfds@dfwsfe.com", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-06T13:27:34.085+00:00", "user_id" : "5612156498213", "username" : "dfsv", "suspended" : false, "person_id" : "56121564963", "credentials_email_sent" : "", "user_guid" : "D564FSD2F8WEGV216S", "user_creation_date" : "2024-08-06T13:29:00.000+00:00", "user_last_update_date" : "2024-08-06T13:29:47.224+00:00" }
]}

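A sketch of one way to break this into one event per user object at index time, assuming a placeholder sourcetype name (rest_user_feed) is assigned to this input: the LINE_BREAKER capture group (the comma) is discarded, so each new event starts at the next "record_id" object, and the SEDCMDs strip the outer wrapper so the remaining events are clean JSON.

[rest_user_feed]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}\s*(,)\s*\{\s*"record_id"
SEDCMD-strip_head = s/^\{\s*"User"\s*:\s*\[\s*//
SEDCMD-strip_tail = s/\]\s*\}\s*$//
KV_MODE = json

The index-time settings (LINE_BREAKER, SEDCMD) belong in props.conf on the instance running the modular input (typically a heavy forwarder), while KV_MODE = json is a search-time setting for the search head.
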
Hi, I have a requirement where I have a table on my dashboard created using Dashboard Studio. I need to redirect to another dashboard when a Column A cell is clicked. Also, when a user clicks a Column C cell, the user should be redirected to a URL. How can we achieve this linking to a dashboard and to a URL on the same table, depending on which column was clicked?

Can I migrate a Splunk Enterprise server from a virtual machine to a physical server?

Hi. I'm trying to monitor MSK metrics with a CloudWatch input. There is no AWS/Kafka entry in the Namespace list, so I just typed it in and set the dimension value to `[{}]`. But I can't get any metrics from the CloudWatch input. Please help me! I'm using Add-on version 7.0.0.

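A hedged pointer rather than a definitive fix, since the exact input fields depend on the add-on version: CloudWatch only returns datapoints when the dimension filter matches how the metric was published, and MSK publishes its AWS/Kafka metrics with dimensions such as "Cluster Name", "Broker ID", and "Topic", so an empty dimension set like [{}] may match nothing. Something along these lines (regex-style values, as used for the add-on's other namespaces) may be worth trying:

Namespace: AWS/Kafka
Metric names: [".*"]
Metric dimensions: [{"Cluster Name": [".*"]}]
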