All Topics


Hello, for this question I am referencing the documentation page: https://docs.splunk.com/Documentation/SOARonprem/6.2.2/Install/UpgradePathForUnprivilegedInstalls

There are two conflicting statements, and I do not know how to proceed with my ON-PREMISES, UNPRIVILEGED, PRIMARY + WARM STANDBY configuration (the database is on the instances, not external).

At the top of the documentation, it states:

"Unprivileged Splunk SOAR (On-premises) running a release earlier than release 6.2.1 can be upgraded to Splunk SOAR (On-premises) release 6.2.1, and then to release 6.2.2."

It says CAN BE. So... is it optional?

"All deployments must upgrade to Splunk SOAR (On-premises) 6.2.1 before upgrading to higher releases in order to upgrade the PostgreSQL database."

It says MUST UPGRADE. So... is it mandatory?

But then, towards the BOTTOM of the table, I'm looking at the row for a starting version of "6.2.0". Steps 1 and 2 are conditionals for clustered and external PostgreSQL databases. Step 3 goes directly to upgrading to 6.2.2.

So... do I, or do I NOT, upgrade to 6.2.1 first?

Howdy, I'm fairly new to Splunk and couldn't google the answer I wanted, so here we go. I am trying to simplify my queries and filter the search results down better. Current example query:

index=myindex
| search (EventCode=4663 OR EventCode=4660) OR (EventID=2 OR EventID=3 OR EventID=11) OR (Processes="*del*.exe" OR Processes="*rm*.exe" OR Processes="*rmdir*.exe")
    process!="C:\\Windows\\System32\\svchost.exe"
    process!="C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe"
    process!="C:\\Program Files\\Common Files\\McAfee\\*"
    process!="C:\\Program Files\\McAfee*"
    process!="C:\\Windows\\System32\\enstart64.exe"
    process!="C:\\Windows\\System32\\wbem\\WmiPrvSE.exe"
    process!="C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe"
| table _time, source, subject, object_file_path, SubjectUserName, process, result

This is just an example; I do the same thing for multiple different fields and indexes. I know it's not the most efficient way of doing it, but I don't know any better ways. Usually I'll start broad and whittle down the things I know I'm not looking for. Is there a way to simplify this (I could possibly do regex, but I'm not really good at that), or something else to make my life easier, such as combining all the values I want to filter into one field? Any and all help/criticism is appreciated.

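One way to tighten this up, sketched against the example above (field names and values are all taken from it): the IN operator collapses repeated ORs on the same field, and both the inclusions and exclusions can sit in the base search before the first pipe, so the filtering happens as early as possible:

index=myindex
    (EventCode IN (4663, 4660) OR EventID IN (2, 3, 11) OR Processes IN ("*del*.exe", "*rm*.exe", "*rmdir*.exe"))
    NOT process IN (
        "C:\\Windows\\System32\\svchost.exe",
        "C:\\Program Files\\Microsoft Advanced Threat Analytics\\Gateway\\Microsoft.Tri.Gateway.exe",
        "C:\\Program Files\\Common Files\\McAfee\\*",
        "C:\\Program Files\\McAfee*",
        "C:\\Windows\\System32\\enstart64.exe",
        "C:\\Windows\\System32\\wbem\\WmiPrvSE.exe",
        "C:\\Program Files\\Windows\\Audio\\EndPoint\\3668cba\\cc\\x64\\AudioManSrv.exe")
| table _time, source, subject, object_file_path, SubjectUserName, process, result

If the exclusion list keeps growing, moving it into a lookup file and filtering with a NOT [| inputlookup excluded_processes.csv | fields process] subsearch keeps the query itself short (the lookup name here is hypothetical).
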
Hi. Running 9.0.6, and a user (who is the owner) can schedule REPORTS, but not DASHBOARDS. It's a CLASSIC dashboard (not the new fancy Stooooodio one).

Dashboards --> Find Dashboard --> Edit button --> NO 'Edit Schedule'
Open dashboard, top right Export --> NO 'Schedule PDF'

My local admin says "maybe they changed something in 9.0.6", but I'm unconvinced until this legendary community agrees. It "feels" like a missing permission is all.

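A quick way to check the permission angle, as a sketch: scheduling PDF delivery of a classic dashboard requires the schedule_search capability, and (as far as I recall) the option is also hidden when the dashboard contains form inputs. Assuming a placeholder role name, the capabilities a role carries can be listed with a REST search:

| rest /services/authorization/roles splunk_server=local
| search title="the_users_role"
| table title capabilities imported_capabilities
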
This is the third blog in our Splunk Love series. Check out our first one, "Describe Splunk in One Word," and our second one, "Aha! Moment with Splunk!"

At Splunk, we believe in the power of connecting and sharing knowledge. Through our Splunk Love campaign, we asked participants for their best advice for other Splunk users, and we would love for this treasure trove of insights to help you make the most of Splunk. Let's hear what people have to say...

Leverage Community and Collaboration

We know that Splunk can be challenging, and many of our participants mentioned that as well. Many of them gave a shout-out to Splunk's supportive community, which provides solutions and inspires new ideas. Engaging with this community through forums, Slack channels, or local user groups was a common piece of advice, too. Don't hesitate to ask questions and share your experiences, as the collective knowledge can be deeply beneficial.

Utilize Splunk's Versatility and Capabilities

Splunk is more than a single solution; its capabilities extend across numerous applications, providing versatile tools that support diverse business needs. Our participants shared that Splunk has helped them achieve professional goals, from early career enablement to advanced use cases. They encourage Splunk users to experiment with different capabilities to discover Splunk's full potential.

Dream Big and Don't Set Limits

The other main theme of the advice is to embrace a big-dreaming mindset with Splunk's wide-ranging capabilities, and not to be afraid to start exploring. The best way to learn is by diving in and leveraging Splunk to its fullest potential.

We hope these insights from fellow Splunk users inspire you to maximize your use of the platform. Remember, we are always here to support you every step of the way.

Join the Conversation

If you have further feedback or suggestions, please visit Splunk VOC to share your voice and ideas, join customer advisory boards, and take part in product preview programs. Your feedback is invaluable to us as we strive to provide the best experience for everyone.

Happy Splunking!
Team Splunk

Below is my raw log:

[08/28/2024 08:14:50] Current Device Info ...
******************************************************************************
Current Mode: Skull Teams
Current Device name: xxxxx
Crestron Package Environment version :1.00.00.004
Crestron Package Firmware version :1.17.00.040
Crestron Package Flex-Hub version :1.3.0127.00204
Crestron Package HD-CONV-USB-200 version :009.051

I want to extract only: Crestron Package Firmware version :xx.xx.xxx

I wrote a query like below, but it is not working. Please help.

index=123 sourcetype = teams
| search "Crestron Package Firmware version :"
| rex field=_raw ":\s+(?<CCSFirmware>.*?)$"
| eval Time(utc)=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host Time(utc) CCSFirmware

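A minimal sketch of a working version, reusing the index and sourcetype from the question. Two likely problems with the original: the rex pattern anchors on the first ":" it finds rather than on the firmware line, and Time(utc) is not usable as a plain eval field name (parentheses would need single quotes), so the eval and table steps fail. A simpler field name avoids that:

index=123 sourcetype=teams "Crestron Package Firmware version"
| rex field=_raw "Crestron Package Firmware version\s*:\s*(?<CCSFirmware>[^\r\n]+)"
| eval time_utc=strftime(_time, "%y-%m-%d %H:%M:%S")
| table host time_utc CCSFirmware
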
Hi all, hoping someone can help me. We have a number of Windows servers with the Universal Forwarder installed (9.3.0) and they are configured to forward logs to an internal heavy forwarder server running Linux.

Recently we've seen crashes on the Windows servers which seem to be because Splunk-MonitorNoHandle is taking more and more RAM until there is none left. I have therefore limited the RAM that Splunk can take to stop the crashing. However, I need to understand the root cause.

It seems to me that the reason is that the HF is blocking the connection for some reason, and when that happens the Windows server has to cache the entries in memory. Once the connection is blocked, it never seems to unblock and the backlog just keeps getting bigger and bigger. Here is an example from the log:

08-21-2024 16:42:13.223 +0100 WARN TcpOutputProc [6844 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=splunkhf02.mydomain.net inside output group default-autolb-group from host_src=WINDOWS02 has been blocked for blocked_seconds=54300. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

I tried setting maxKBps to 0 in limits.conf on the Windows server; I also tried 256 and 512, but we're still having the same problems. If I restart the Splunk service it 'solves' the issue, but of course it also loses all of the log entries from the buffer in RAM.

Can anyone help me to understand the process here? Is the traffic being blocked by a setting on the HF? If so, where could I find it to modify it? Or is it something on the Windows server itself? Thanks for any assistance!

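One way to see which side is actually backing up, as a sketch: the "paused the data flow" warning on the UF usually means a queue on the receiving heavy forwarder (or somewhere further downstream) is full. Assuming the HF's internal logs are searchable, its queue fill levels can be charted from metrics.log (host name taken from the warning above):

index=_internal host=splunkhf02* source=*metrics.log group=queue
| timechart span=5m perc95(current_size_kb) by name

If queues such as parsingqueue or the output queues sit at their maximum, the bottleneck is on the HF or wherever it forwards to, not on the Windows UF. It is also worth double-checking that the maxKBps setting was placed inside the [thruput] stanza of limits.conf, since it has no effect elsewhere.
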
When I search, I want something like this: if(ID=99) then lookup 1, else lookup 2. What I have right now is something like this, but I don't know how to put it in the correct syntax:

| eval To_AccountID= if(ID="99",
    [search | lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID, AccountType as To_Account],
    [search | lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID, AccountType as To_Account])

What is the best way to code something like this?

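eval cannot run a lookup or a subsearch inside if(), but the same effect can be had by running both lookups into temporary fields and then picking one with eval. A minimal sketch, reusing the lookup names and field mappings from the question (they may need adjusting to match the actual CSV headers):

| lookup Payroll1.csv PARENTACCOUNT OUTPUT Product_Type as To_AccountID_1, AccountType as To_Account_1
| lookup Payroll2.csv PARENTACCOUNT, ID as PARENTID OUTPUT TYPE as To_AccountID_2, AccountType as To_Account_2
| eval To_AccountID=if(ID="99", To_AccountID_1, To_AccountID_2)
| eval To_Account=if(ID="99", To_Account_1, To_Account_2)
| fields - To_AccountID_1, To_Account_1, To_AccountID_2, To_Account_2
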
Hello, I have a CSV file that I monitor via the Universal Forwarder (UF). I'm encountering an issue where sometimes I cannot find the fields in Splunk when I run index=myindex, even though they appear on other days. The CSV file does not contain a header, and the format of the file is the same every day (each day starts with an empty file that is populated later). Here is the props.conf configuration that I'm using:

[csv_hff]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
FIELD_NAMES = heure,id,num,id2,id3
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = heure
TIME_FORMAT = %d/%m/%Y %H:%M:%S
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true

Has anyone else encountered the same problem? Splunk version 9. Thank you.

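One thing worth verifying, as a sketch: with INDEXED_EXTRACTIONS = csv the parsing happens on the forwarder itself, so this props.conf stanza must be deployed on the UF (not only on the indexer or search head), and the resulting fields are indexed fields. Assuming the sourcetype is csv_hff and the id field name above, tstats can show whether the fields were actually indexed on the days they seem to be missing:

| tstats count where index=myindex sourcetype=csv_hff id=* by _time span=1d

If the counts drop to zero on the problem days while raw events still exist, the UF-side parsing (and therefore the props.conf placement, or the empty-file-at-midnight behaviour) is the place to look.
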
Watch On-Demand

We’ll explore the exciting new features in Splunk Enterprise 9.3. Whether you're focused on security, IT operations, or just looking to get more out of your Splunk environment, this session is packed with practical tips and insights. We’ll show you how these updates can help you stay ahead in a fast-changing digital world by improving performance, simplifying management, and making your Splunk experience more personalized and user-friendly.

Running queries on really large sets of data and sending the output to an outputlookup works well for weekly refreshed dashboards. Is there a way to have some numbers from the initial report go into a separate, second outputlookup for monthly tracking? For example, a weekly report or dashboard shows me details on a daily basis, plus the weekly summary - great. Now the weekly summary should additionally go to a separate file for the monthly view. Is there a way to 'tee' results to different outputlookups?

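One pattern that can act as a 'tee', sketched here with made-up lookup names and a placeholder summary: outputlookup can appear in the middle of a pipeline (it writes the current results and passes them through), and appendpipe can compute the monthly summary and append it to a second lookup without disturbing the main results:

index=web sourcetype=access_combined
| stats count by date_mday
| outputlookup weekly_details.csv
| appendpipe
    [ stats sum(count) as weekly_total
      | eval week=strftime(now(), "%Y-%V")
      | outputlookup append=true monthly_summary.csv
      | where 1=0 ]

The where 1=0 at the end of the appendpipe keeps the summary row out of the main result set; drop it if the summary should also appear in the dashboard.
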
Good day, I have a query that I would like to add more information onto. The query pulls all users that accessed an AI site and gives me data for weekdays as a 1 or 0 depending on whether the site was accessed. The query gets users from the index db_it_network, and I would like to add the department of each user by querying index=collect_identities sourcetype=ldap:query. The users are stored in the collect_identities index as 'email' and their department in the bunit field.

index=db_it_network sourcetype=pan* url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| table user, app, date_wday
| stats count by user app date_wday
| chart count by user app
| sort app 0

Note: the | stats | chart is necessary to de-duplicate, so that one user returns one result per app per day.

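A minimal sketch of one way to bolt the department on, assuming the user field in the firewall data matches the email field in the identity data (the lower() matching is an assumption to adjust to how the values actually compare). The OR group is wrapped in parentheses so sourcetype=pan* applies to every branch, and the identity index is summarised once in a subsearch and joined on user; the final chart can then be run over whichever combination of user, department, and app is needed:

index=db_it_network sourcetype=pan* (url_domain="www.perplexity.ai" OR app=claude-base OR app=google-gemini* OR app=openai* OR app=bing-ai-base)
| where date_wday="monday" OR date_wday="tuesday" OR date_wday="wednesday" OR date_wday="thursday" OR date_wday="friday"
| eval app=if(url_domain="www.perplexity.ai", url_domain, app)
| stats count by user, app, date_wday
| join type=left user
    [ search index=collect_identities sourcetype=ldap:query
      | eval user=lower(email)
      | stats latest(bunit) as department by user ]
| table user, department, app, date_wday, count

join has subsearch limits (roughly 50,000 rows by default), so if the identity data is large, a lookup table populated from that index is the sturdier option.
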
I can get a numeric table aligned to the left in the statistics view with

| eval count=printf("%-10d", <your_field>)

However, the alignment does not translate to the dashboard. Any insight on why this does not work, or is there another way to align numeric results to the right on a dashboard for aesthetic purposes?

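Part of the explanation is likely that printf() turns the value into a padded string, and the dashboard table renders it as HTML where the padding whitespace is collapsed, so the alignment never shows. For a classic (Simple XML) dashboard, one commonly used sketch is a small CSS override in a hidden panel; the table id (tbl_numbers), the token name, and the selector here are assumptions to adapt:

<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        #tbl_numbers table tbody td { text-align: right !important; }
      </style>
    </html>
  </panel>
</row>
...
<table id="tbl_numbers">
  <search> ... </search>
</table>
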
Hello, I need urgent help. I am using the REST API Modular Input and the problem is that I am not able to set the parameters for event breaking. Below is a sample of the log:

{ "User" : [
{ "record_id" : "2", "email_address" : "dsfsdf@dfdf.net", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-23T05:28:43.091+00:00", "user_id" : "54216542", "username" : "Audit.Test1", "suspended" : false, "person_id" : "", "credentials_email_sent" : "", "user_guid" : "21SD6F546S2SD5F46", "user_creation_date" : "2024-08-23T05:28:42.000+00:00", "user_last_update_date" : "2024-08-23T05:28:44.000+00:00" },
{ "record_id" : "3", "email_address" : "XDCFSD@dfdf.net", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-28T06:42:43.736+00:00", "user_id" : "300000019394603", "username" : "Assessment.Integration", "suspended" : false, "person_id" : "", "credentials_email_sent" : "", "user_guid" : "21SD6F546S2SD5F46545SDS45S", "user_creation_date" : "2024-08-28T06:42:43.000+00:00", "user_last_update_date" : "2024-08-28T06:42:47.000+00:00" },
{ "record_id" : "1", "email_address" : "dfds@dfwsfe.com", "email_address_id" : "", "email_type" : "", "email_creation_date" : "", "email_last_update_date" : "2024-08-06T13:27:34.085+00:00", "user_id" : "5612156498213", "username" : "dfsv", "suspended" : false, "person_id" : "56121564963", "credentials_email_sent" : "", "user_guid" : "D564FSD2F8WEGV216S", "user_creation_date" : "2024-08-06T13:29:00.000+00:00", "user_last_update_date" : "2024-08-06T13:29:47.224+00:00" }
]}

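A hedged sketch of one props.conf approach, assuming each element of the "User" array should become its own event and using a placeholder sourcetype name: break between the closing and opening braces of consecutive records, then strip the array envelope so each event is a clean JSON object (timestamp settings are left out here):

[rest_api_users]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,\s*)\{\s*"record_id"
SEDCMD-strip_prefix = s/^\{\s*"User"\s*:\s*\[\s*//
SEDCMD-strip_suffix = s/\s*\]\s*\}\s*$//
KV_MODE = json

LINE_BREAKER and SEDCMD are applied on the first full Splunk instance the data passes through (the heavy forwarder or indexer where the modular input runs), so that is where the stanza belongs.
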
Hi, I have a requirement where I have a table on my dashboard created using Dashboard Studio. I need to redirect to another dashboard when a Column A cell is clicked. Also, when a user clicks on a Column C cell, the user should be redirected to a URL. How can we achieve this linking to a dashboard and to a URL on the same table, based on the column that was clicked?

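For reference, a sketch of the two Dashboard Studio event handler types involved, attached to a table visualization; the dashboard name, app, URL, and the $row...$ token are placeholders, and routing by which column was clicked is the part that typically needs extra work (for example via set-token handlers or the interaction options in newer Studio versions), so this only shows the building blocks:

"viz_table_1": {
  "type": "splunk.table",
  "dataSources": { "primary": "ds_main" },
  "eventHandlers": [
    {
      "type": "drilldown.linkToDashboard",
      "options": { "app": "search", "dashboard": "my_other_dashboard", "newTab": true }
    },
    {
      "type": "drilldown.customUrl",
      "options": { "url": "https://example.com/details?id=$row.ColumnC.value$", "newTab": true }
    }
  ]
}
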
Can I migrate the Splunk Enterprise server from a virtual machine to a physical server?

Hi. I'm trying to monitor MSK metrics via a CloudWatch input. There is no AWS/Kafka in the Namespace list, so I just typed it in and set the dimension value to `[{}]`. But I can't get any metrics from the CloudWatch input. Please help me! I'm using Add-on version 7.0.0.

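A hedged sketch of what the dimensions JSON could look like for the AWS/Kafka namespace: CloudWatch only returns metrics when the dimension names match what MSK publishes (for example "Cluster Name", "Broker ID", "Topic"), and the add-on expects a list of name-to-value-list mappings, with regular expressions allowed for the values. The cluster name here is a placeholder:

[{"Cluster Name": ["my-msk-cluster"]}, {"Cluster Name": ["my-msk-cluster"], "Broker ID": [".*"]}]

An empty mapping like [{}] generally matches nothing, because the metric queries need concrete dimension combinations to ask for.
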
Hi, I'm currently working on ingesting 8 CSV files from a path using inputs.conf on a UF, and the data is getting ingested. The issue is that these 8 CSV files are overwritten daily with new data by an automation script, so the data inside each CSV file changes daily.

I want to ingest the complete CSV data daily into Splunk, but what I can see is that only a small set of data is getting ingested, not the complete CSV file contents.

My inputs.conf is:

[monitor://C:\file.csv]
disabled = false
sourcetype = xyz
index = abcd
crcSalt = <DATETIME>

Can someone please help me determine whether I'm using the correct input or not? The ultimate requirement is to ingest the complete CSV data from the 8 CSV files into Splunk daily. Thank you.

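One option worth considering, sketched with a placeholder path: when a file is rewritten in place, the monitor input compares the new beginning of the file against its saved CRC and seek pointer, so depending on how the script rewrites the files Splunk may only pick up what looks like appended data. A batch input re-reads whatever files appear in the path in full, at the cost of deleting them after indexing (so the script must be able to recreate them):

[batch://C:\data\daily\*.csv]
move_policy = sinkhole
disabled = false
sourcetype = xyz
index = abcd

If deleting the files is not acceptable, having the automation write to date-stamped file names instead of overwriting the same 8 names usually lets the plain monitor stanza ingest everything.
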
Hello! I am trying to collect 3 additional Windows Event Logs and I have added them in inputs.conf, for example:

[WinEventLog://Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true

Admin, Autopilot, and Operational were added the same way. I also added in props.conf:

[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
rename = wineventlog
[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Autopilot]
rename = wineventlog
[WinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Operational]
rename = wineventlog

The data is coming in; however, none of the fields are parsed as interesting fields. Is there something I am missing? I looked through some of the other conf files, but I think I am in over my head when it comes to making a new section in props. I thought the base [WinEventLog] stanza would take care of the basic breaking out of interesting fields like EventID, so I am a bit lost.

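A hedged sketch of the likely mismatch: with renderXml=true the events arrive as XML, and (at least with the Splunk Add-on for Microsoft Windows) the extractions for XML-rendered events live under the XmlWinEventLog sourcetype rather than the classic wineventlog one; the input also usually assigns a sourcetype of the form XmlWinEventLog:<channel>. Assuming the add-on is installed on the search head, something along these lines may be closer:

[XmlWinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Admin]
rename = XmlWinEventLog

[XmlWinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Autopilot]
rename = XmlWinEventLog

[XmlWinEventLog:Microsoft-Windows-DeviceManagement-Enterprise-Diagnostics-Provider/Operational]
rename = XmlWinEventLog

Alternatively, dropping renderXml (or setting it to false) keeps the classic text format that the wineventlog extractions expect.
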
How can I implement a post-process search using the Dashboard Studio framework? I can see that there is excellent documentation for doing this in Simple XML (Searches power dashboards and forms - Splunk Documentation), but I can't seem to find the relevant information for how to do this in the source definition for Dashboard Studio. Note: I am not attempting to use a savedSearch.

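For what it's worth, Dashboard Studio's equivalent of a post-process search is a chain search: a data source of type ds.chain whose extend option points at the id of a base ds.search, with the chained query starting at a pipe. A minimal sketch with placeholder ids and queries:

"dataSources": {
  "ds_base": {
    "type": "ds.search",
    "options": {
      "query": "index=_internal | stats count by sourcetype, log_level"
    }
  },
  "ds_errors": {
    "type": "ds.chain",
    "options": {
      "extend": "ds_base",
      "query": "| where log_level=\"ERROR\" | stats sum(count) as errors by sourcetype"
    }
  }
}

Visualizations then reference ds_errors like any other data source.
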
I'm not very good with SPL. I currently have Linux application logs that show the IP address, user name, and whether the user failed or had a successful login. I'm interested in finding a successful login after one or more failed login attempts. I currently have the following search. The transaction command is necessary where it is, otherwise all the events are split up into separate events of varying line counts.

index=honeypot sourcetype=honeypotLogs
| transaction sessionID
| search "SSH2_MSG_USERAUTH_FAILURE" OR "SSH2_MSG_USERAUTH_SUCCESS"

Below is an example event. For clarity, I replaced or omitted some details from the logs.

[02] Tue 27Aug24 15:20:57 - (143323) Connected to 1.2.3.4
...
...
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_FAILURE
...
[30] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_REQUEST: user: bob
[02] Tue 27Aug24 15:20:57 - (143323) User "bob" logged in
[31] Tue 27Aug24 15:20:57 - (143323) SSH2_MSG_USERAUTH_SUCCESS: successful login

Any tips on getting my search to find events like this? Currently I only have field extractions for the IP (1.2.3.4), user (bob), and sessionID (143323). I can possibly create a field extraction for the SSH2 messages but I don't know if that will help or not. Thanks!

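A minimal sketch building on the existing search and field extractions: once transaction groups a whole session into one event, requiring both markers in the same transaction finds sessions where a failure was followed by a success, and counting the FAILURE occurrences in the grouped _raw gives the number of failed attempts before the login (sessionID and user are the fields mentioned in the question; add the IP field name as extracted):

index=honeypot sourcetype=honeypotLogs
| transaction sessionID
| search "SSH2_MSG_USERAUTH_FAILURE" "SSH2_MSG_USERAUTH_SUCCESS"
| eval failed_attempts=mvcount(split(_raw, "SSH2_MSG_USERAUTH_FAILURE")) - 1
| table _time, sessionID, user, failed_attempts

If only failures strictly before the success should count, a streamstats-based approach per sessionID (without transaction) can also work, but the form above stays closest to the existing search.
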