All Topics

Need some help in extracting Group Membership details from Windows Event Code 4627. As explained in this answer, https://community.splunk.com/t5/Splunk-Search/Regex-not-working-as-expected/m-p/470417, the following seems to work to extract Group_name, but the capture doesn't stop once the group list ends. Instead, it continues to match everything till the end of the line. I experimented with (?ms) and (?m) but didn't have any success.

"(?ms)(?:^Group Membership:\t\t\t|\G(?!^))\r?\n[\t ]*(?:[^\\\r\n]*\\\)*(?<Group_name>(.+))"

Sample event:

09/04/2024 11:59:59 PM
LogName=Security
EventCode=4627
EventType=0
ComputerName=DCServer.domain.x.y
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=64222222324
Keywords=Audit Success
TaskCategory=Group Membership
OpCode=Info
Message=Group membership information.

Subject:
	Security ID:		NT AUTHORITY\SYSTEM
	Account Name:		DCServer$
	Account Domain:		Domain
	Logon ID:		0x1111

Logon Type:	3

New Logon:
	Security ID:		Domain\Account
	Account Name:		Account
	Account Domain:		Domain
	Logon ID:		0x5023236

Event in sequence:	1 of 1

Group Membership:
	Domain\Group1
	Group2
	BUILTIN\Group3
	BUILTIN\Group4
	BUILTIN\Group5
	BUILTIN\Group6
	NT AUTHORITY\NETWORK
	NT AUTHORITY\Authenticated Users
	Domain\Group7

The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network). The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on. This event is generated when the Audit Group Membership subcategory is configured. The Logon ID field can be used to correlate this event with the corresponding user logon event as well as to any other security audit events generated during this logon session.

When I use this regex, it does capture starting from the group list but continues on till the end of the event. How can I tell the regex to stop matching once the group list ends? Also, this regex seems to be putting all the groups into a single match. Is it possible to make it multi-valued, so that we can count the total number of groups present in a given event, e.g. 9 groups in the event example above?

Thanks,
~Abhi
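A rough, untested sketch of one way to approach both asks in SPL: first capture the whole group block (this assumes the list is terminated by a blank line, which matches the 4627 message layout above; adjust the terminator if your raw events differ), then split that block into a multivalue field with max_match=0 so the groups can be counted:

... | rex "(?ms)Group Membership:\s*(?<group_block>.+?)\r?\n\r?\n"
    | rex field=group_block max_match=0 "(?<Group_name>[^\t\r\n]+)"
    | eval group_count=mvcount(Group_name)

The second rex treats every tab-indented line inside the block as one value (spaces are allowed, so names like NT AUTHORITY\Authenticated Users stay whole), and mvcount should return 9 for the sample event. Note this keeps the DOMAIN\ prefix on each value, unlike the original regex.
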
I'm working with Dashboard Studio for the first time and I've got a question. Originally I created a table search that returns data depending on what is in the $servers_entered$ field.  That works.  I have been asked to add two single value fields.  The first is showing the number of servers in the $servers_entered$ field and that works.  The second is showing the number of servers in the table search.  There should be a way of linking that information, but I can't figure out how.  I could run the search again, but that is rather inefficient. How do you tie the search result count from a table search to a single value field? TIA, Joe
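One pattern that avoids re-running the search is a chain search in the dashboard's source JSON: the single value extends the table's data source and just appends | stats count, so the base search only runs once. A hedged sketch (the data source names and base query are placeholders, not your dashboard's actual definitions):

{
  "dataSources": {
    "ds_servers_table": {
      "type": "ds.search",
      "options": {
        "query": "index=main host IN ($servers_entered$) | table host status"
      }
    },
    "ds_server_count": {
      "type": "ds.chain",
      "options": {
        "extend": "ds_servers_table",
        "query": "| stats count"
      }
    }
  }
}

Point the single value visualization at ds_server_count and it should track the table's row count.
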
I'm missing something and it's probably blatantly obvious... I have a search returning a number, but I want to have a filler gauge show the value as it approaches a maximum value. In this example, I'd like the gauge to cap at 10,000, but it always shows 100.
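If the range can be set from the search side, one option is the SPL gauge command, which defines the range values explicitly (the field name count here is a placeholder for whatever your search returns):

... | stats count | gauge count 0 10000

With that, the gauge's scale runs from 0 to 10,000 instead of the default 0-100.
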
Register here. This thread is for the Community Office Hours session on Observability: Digital Experience Monitoring on Wed, October 23, 2024 at 1pm PT / 4pm ET. This is your opportunity to ask your specific Digital Experience Management (DEM) questions about Splunk Real User Monitoring (RUM) and Splunk Synthetics, including:

Gaining a full view of the end user experience
Running front-end/back-end investigations to pinpoint errors
Running synthetics tests to proactively predict app and website performance
Measuring KPIs focused on customer experience
Anything else you'd like to learn!

We look forward to seeing you there! Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here). Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants. Look forward to connecting!
Introduction

In our last post, we went over Splunk Synthetic Monitoring basics to kickstart proactive performance monitoring, improve user experience, and meet SLAs. Now let's dig into more detail and build a Browser test using the Google Chrome Recorder.

Using the Chrome Recorder to build out Browser tests is the recommended way to capture complex and critical user flows like signup, login, and checkout. It's simpler and more resilient than manually targeting elements using things like XPath expressions and lets you quickly get up and running with Synthetic Monitoring.

After we use the Google Chrome Recorder to record an interaction with our online boutique e-commerce website, we'll import our recording into Splunk Synthetic Monitoring. Once imported, we'll organize our test, view the results, and alert on failures. To follow along, you'll need the Google Chrome Browser and access to Splunk Observability Cloud (psst! Here's a 14-day free trial).

Building a Browser Test

For our online boutique, checkout is the most critical business process, so we'd like to monitor it using Splunk Synthetic Monitoring. To do this, we'll create a recording of the checkout flow by following the record, replay, and measure user flows example in the Chrome DevTools Docs. With our Product Checkout recording complete, we'll export it from the browser as JSON.

Moving over to Splunk Observability Cloud and navigating to Synthetics, we can use this recording to create our Browser test. First, we'll add a new Browser test. After we configure our new test by setting the necessary values, we can import our recording by selecting Import (side note: you won't be able to select Import until you provide a name for your test). Once the JSON file is uploaded, we can continue to edit our test, or we can try out our new test to make sure the configuration is valid by selecting Try now. We'll see output from our test run, but these results are ephemeral and don't impact our overall test run metrics. It looks like our test run was successful, so let's take a moment to celebrate how easy that was! Now on to fine-tuning.

Test Organization

It looks like our test is made up of one big, long interaction, which isn't super helpful for future troubleshooting purposes. Transactions help us break our Synthetic tests into logical steps that represent user flows. Right now, it looks like our test has one step, when in fact, we took multiple steps (browsing the catalog, adding an item to the cart, actually placing the order) when we were recording the interaction with our site. If we go back and edit our test to include transactions, we'll be able to scope our results to each transaction and quickly identify the exact points where we encounter performance issues. Let's see what this looks like.

First, we'll close out of our Try now results. Then we'll select Add synthetic transaction, which will add a new transaction section. Let's name our first transaction Home Page. We'll delete the auto-populated Click step and drag our first "Go to url" step into this new transaction. We've gone ahead and organized the remaining steps into transactions.

Let's see what a test run looks like with these more discrete transactions. The Business Transactions section of our run results is now broken down into our defined transactions. We can click on these transactions to filter filmstrip and waterfall results and also use them to identify, at a glance, when a step in our test fails (we'll see this in a bit).
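For reference, a heavily trimmed sketch of what a Chrome Recorder JSON export looks like (the URL and selectors below are made-up placeholders, not the actual boutique recording):

{
  "title": "Product Checkout",
  "steps": [
    { "type": "setViewport", "width": 1280, "height": 720 },
    { "type": "navigate", "url": "https://onlineboutique.example.com/" },
    { "type": "click", "selectors": [["#product-1 .add-to-cart"]] },
    { "type": "click", "selectors": [["#place-order"]] }
  ]
}

Each step in this file becomes a step in the imported Browser test, which is why organizing them into transactions afterwards pays off.
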
Adding Assertions and Detectors

Before we call this test good, we need to add some assertions so that our test will actually succeed/fail on defined success/failure conditions. It would also be helpful to receive a notification whenever our test fails so we can resolve any issues before our customers are impacted. Let's close out these Try now results and continue editing our test.

To create an assertion, we first add a step to the transaction we want to validate. This will auto-populate with a Click action. If we select the Click action and expand the dropdown, we can scroll down to view the available Assertions. In our Home Page transaction, let's assert the text "Free shipping with $75 purchase" is visible, so we know we've successfully loaded the HTML for our page. We could also validate the presence or absence of elements on the page by adding assertions for things like specific products. These types of assertions are more robust and help test out database connections to further ensure critical paths are up and running.

After we've added assertion steps to each of our transactions, we can Return to test and submit our first Browser test. Note: refreshing the page or selecting Editing Checkout Process at the top of the page won't save any of the current changes. If you want to save progress, it's best to submit the test and then make incremental edits along the way so you don't lose updates.

Our test is now active and running, and if we select our test from the Overview page, we can see the results. We don't yet have line graph charts for the last day, 8 days, and 30 days since we just created this test, but we do have Uptime Trends, Availability, and Performance KPIs. We can select a test run from the Recent run results or a plot point from our Availability chart to view test run results.

From our run results page, we can see right away that our test failed thanks to the red banner at the top of the page. We can also easily see which transaction failed because we should have 5 transactions, but instead, we only have 3. It looks like the assertion we set on our Add to Cart transaction failed, so the other 2 transactions didn't execute.

Rather than constantly watching test runs, let's add a detector for these kinds of failures. We could have added a detector when we initially configured our test, or we can add detectors from our test details page. We'll create a detector and name it Checkout Process Downtime. This detector will alert on Downtime that exceeds the given threshold of 10%. Every failed test run contributes to this downtime threshold, so if test run failures exceed our set threshold, we'll get alerted. When creating a detector, we can conveniently see how frequently it will alert based on the thresholds we set so we can fine-tune them.

Wrap Up

That's it! We now have a Splunk Synthetic Monitoring Browser test imported from the Google Chrome Recorder. This test will ensure our critical checkout workflow is performing as expected and alert us when it's not so we can resolve issues before our users are impacted.

If you're ready to build confidence around your users' experiences, meet SLAs, and maintain a competitive edge when it comes to your application's performance, start by building out your own Splunk Synthetic Monitoring Browser tests. Either head over to Synthetics in Splunk Observability Cloud to get started or sign up for a Splunk Observability Cloud 14-day free trial.
Resources

Using the Chrome Recorder With Splunk Synthetic Monitoring
Use a Browser test to test a webpage
Running Synthetics Browser Tests
Are there any working screenshots or a demo available for this app? There seems to be no video tutorial or any guidance docs besides the main doc. Any guidance would be helpful. I am looking for a way to get JIRA->Splunk data in whenever there is a change in an issue, or just to be able to query all the issues in JIRA via Splunk and pull back stats.
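While evaluating the app, it may help to know that integrations like this generally poll JIRA's REST search endpoint. A hedged sketch of the underlying call (hostname, credentials, and JQL are placeholders, and this is the generic JIRA API, not anything specific to this app):

curl -s -u user@example.com:API_TOKEN \
  "https://jira.example.com/rest/api/2/search?jql=updated%3E%3D-15m&fields=key,status,updated"

That JQL pulls issues updated in the last 15 minutes, which is the same shape of query a periodic "issue changed" input would run.
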
Below is a quite simple query to fill a drop-down list in my dashboard.

index=gwcc | eval file=lower(mvindex(split(source,"/"),-1)) | dedup file | table source, file | sort file

The point is it takes 30-60 seconds to generate. Do you have an idea how to simplify it? Or write it in a more efficient way?
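Since source is an indexed field, a hedged alternative is to let tstats enumerate the distinct sources instead of scanning raw events, which is usually far faster for populating a dropdown:

| tstats count where index=gwcc by source
| eval file=lower(mvindex(split(source,"/"),-1))
| dedup file
| table source, file
| sort file
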
I'm looking into upgrading Splunk Enterprise from 9.0.4 to 9.3.0. Following the upgrade docs, there's a step to back up the KV store:

Check the KV store status
To check the status of the KV store, use the show kvstore-status command:
./splunk show kvstore-status

When I run this command, it asks me for a Splunk username and password. This environment was handed over by a project team, but nothing was handed over about what the Splunk password might be, or whether we actually use a KV store. I've tried the admin password, but that hasn't worked.

I've found some Splunk documents advising the KV store config would be in $SPLUNK_HOME/etc/system/local/server.conf, under [kvstore]. There is nothing in our server.conf under kvstore.

I've also found some notes saying the KV store won't start if there's a $SPLUNK_HOME\var\lib\splunk\kvstore\mongo\mongod.lock file present. We have 2 Splunk servers - one of these has a lock file dated Oct 2022, and the other dated July 19th. So based on this, I suspect it's not used, otherwise we'd have hit issues with it before? That's just a guess, but this is my first foray into Splunk, so I thought I'd ask: based on the above scenarios, do I need to back up the KV store or not, and are there any other checks to confirm definitively whether we have a KV store that's used?

Thanks in advance
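As a side note, the CLI prompt can be bypassed by passing credentials inline once you do recover a working admin login (the credentials below are placeholders):

./splunk show kvstore-status -auth admin:yourpassword
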
After updating the SSL keys, events with errors "ExecProcessor from python /opt/splunk/etc/apps/SA-Hydra/bin/bootstrap_hydra_gateway.py" from the source "/opt/splunk/var/log/splunk/splunkd.log" began to be sent to the index "_internal". Splunk version is 7.3.2.
Hi all, Has anyone had experience matching Linux audit logs to CIM before? I installed the Add-on for Unix and Linux, but it didn't help. Looking at some of the use cases in Security Essentials, it seems they expect data from EDR solutions like CrowdStrike or Symantec, rather than local Linux audit logs. Does this mean there is no way to use the out-of-the-box use cases created in Security Essentials/Enterprise Security for Linux logs?   Thanks
I have an input playbook with two output variables. I can retrieve these variables when I call the playbook using the playbook block in the UI. However, I now need to loop over items in a list and call the playbook for each item in that list, which requires using the phantom.playbook function. From what I can see, there is no way to retrieve the output of this playbook now, is that correct?

Example below:

for item in prepare_data__post_list:
    # call the input playbook once per item in the list
    phantom.playbook(
        playbook="local/__Post_To_Server",
        container={"id": int(container_id)},
        inputs={
            "body": item,
            "headers": prepare_data__headers,
            "path": prepare_data__path,
        },
    )
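Not an authoritative answer, but a hedged sketch of one direction to try, assuming your SOAR version's phantom.playbook accepts the name and callback arguments that the visual editor's generated code uses; the playbook_output datapath here is an assumption worth verifying against a UI-generated block:

def post_done(action=None, success=None, container=None, results=None, handle=None,
              filtered_artifacts=None, filtered_results=None, custom_function=None, **kwargs):
    # Hypothetical: collect the child playbook's declared outputs by datapath.
    outputs = phantom.collect2(container=container,
                               datapath=["post_to_server_0:playbook_output:my_output"])
    phantom.debug(outputs)

for idx, item in enumerate(prepare_data__post_list):
    phantom.playbook(
        playbook="local/__Post_To_Server",
        container={"id": int(container_id)},
        name="post_to_server_{}".format(idx),  # unique name per call (assumption)
        callback=post_done,
        inputs={"body": item,
                "headers": prepare_data__headers,
                "path": prepare_data__path},
    )
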
Hello Splunkees,

What are the differences between the different options for app updates? I know three different ways to update an app:

1) Via the web interface: Apps -> Manage Apps -> Install app from file -> Check 'Upgrade app. Checking this will overwrite the app if it already exists.'
2) Via CLI: ./splunk install app <app_package_filename> -update 1 -auth <username>:<password>
3) Extract the content of the app.tgz to $SPLUNK_HOME/etc/apps/ (if the app already exists, overwrite files) and after that restart the splunk service.

Background of my question: I want to implement an automated app update process with ansible for our environment and I want to use the smartest method. Currently, we're using Splunk 9.1.5.

Thank you!

BR
dschwarz
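For the ansible angle, a hedged sketch of automating method 3 (the paths, package name, and ownership below are assumptions for a default Linux install, not a tested role):

- name: Extract app package into the Splunk apps directory
  ansible.builtin.unarchive:
    src: files/my_app.tgz
    dest: /opt/splunk/etc/apps/
    owner: splunk
    group: splunk

- name: Restart Splunk to pick up the updated app
  ansible.builtin.command: /opt/splunk/bin/splunk restart
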
I'd like to create a Service Analyzer that displays only KPIs. Is this possible? If it is, I'd like to know the procedure.
I have 60 correlation searches in Content Management. Some of my correlation searches don't trigger to Incident Review, but when I run them manually they show results. No suppression, no throttling, and now I'm confused. Someone help me please.
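One hedged place to start looking: check whether the scheduler actually ran those searches and what they returned at run time (the savedsearch_name value is a placeholder):

index=_internal sourcetype=scheduler savedsearch_name="My Correlation Search"
| stats count by status, result_count

A status of skipped or deferred, or a result_count of 0 on the scheduled runs, would point at scheduling/time-window issues rather than the search logic itself.
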
Hi everyone, I'm currently sending vCenter logs via syslog to Splunk and have ensured that the syslog configuration and index name on Splunk are correct. However, the logs still aren't appearing in the index. I have tried tcpdump and I can see the logs arriving at my Splunk instance. Below I attach the syslog configuration and the tcpdump result from my Splunk instance. What could be the cause of this issue, and what steps should I take to troubleshoot it? Thanks for any insights!
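Since tcpdump proves the packets arrive, a hedged next check (paths assume a default Linux install) is whether splunkd actually has an input bound on that port, and whether it is logging errors about it:

/opt/splunk/bin/splunk btool inputs list udp --debug
# then, inside Splunk, search its own logs for input/parsing errors:
# index=_internal source=*splunkd.log* (ERROR OR WARN) udp
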
Hi,

The Splunk Heavy Forwarders and Deployment Servers were running under the splunk user. Unfortunately, during the upgrade process, an admin used the root account, and now these Splunk instances are running as root. How can I switch back to the splunk user? These instances are running on Red Hat Linux.
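A hedged sketch of the usual recovery, assuming a default /opt/splunk install and an existing splunk user/group (verify against your init/systemd setup and test on one instance first):

/opt/splunk/bin/splunk stop
# fix ownership of everything root may have written during the upgrade
chown -R splunk:splunk /opt/splunk
# re-register boot-start so splunkd launches as the splunk user
/opt/splunk/bin/splunk disable boot-start
/opt/splunk/bin/splunk enable boot-start -user splunk
su - splunk -c "/opt/splunk/bin/splunk start"
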
Experiencing an issue on a few random servers, some Domain Controllers and some Member Servers. Windows Security Event logs just seem to randomly stop sending. If I restart the Splunk UF, then the event logs start gathering. We are using Splunk UF 9.1.5, but I also noticed this issue on Splunk UF 9.1.4. I thought it had been corrected when we upgraded to Splunk UF 9.1.5, but it's re-appeared - the most recent occurrence seemed to happen roughly three weeks ago on 15 servers across multiple clients we manage. This unfortunately has resulted in the loss of data for a few weeks, as the local event logs eventually got discarded as the data filled up. I have now written a Splunk alert to notify us each day if any servers are in this situation (it compares the Windows servers reporting into two different indexes, one of which is for Windows Security Event logs), so we can more easily spot the issue. We are just raising a case with Splunk support today about the issue.
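For anyone wanting a similar safety net, a hedged sketch of that kind of "host went quiet" alert (the index name and four-hour window are placeholders):

| tstats latest(_time) as last_seen where index=wineventlog by host
| where last_seen < relative_time(now(), "-4h@h")
| convert ctime(last_seen)

Scheduled daily, this lists any host whose Security Event log data has stopped arriving.
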
Hello Members,

I have configured a Splunk HF to receive a data input on port 1531/udp.

I used the command:

firewall-cmd --permanent --zone=public --add-port=1531/udp

but when I use firewall-cmd --list-all, the port doesn't appear in the list of open ports. Is this considered a problem? I also checked netstat and the port is listening on 0.0.0.0 (all).

Thanks
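A hedged note on why this can happen: --permanent only writes the saved configuration, while --list-all shows the runtime zone, so the rule won't appear until the firewall is reloaded. For example:

firewall-cmd --permanent --zone=public --add-port=1531/udp
firewall-cmd --reload
firewall-cmd --zone=public --list-ports

(netstat showing splunkd listening on 0.0.0.0:1531 only proves the socket is open, not that the firewall will pass traffic to it.)
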
Hi Everyone,

I have some events with the field Private_MBytes and host = vmt/vmu/vmd/vmp.

I want to create a case where, when host is vmt/vmu/vmd and Private_MBytes > 20000, OR when host is vmp and Private_MBytes > 40000, the events should display severity_id 4.

Example:

eval severity_id=if(Private_MBytes >= "20000" AND host IN [vmd*,vmt*,vmu*],4,2)
eval severity_id=if(Private_MBytes >= "40000" AND host ==vmp*,4,2)

Note: if Private_MBytes > 40000, any vmd/vmu/vmt host should still display severity_id 4, and the same for vmp.
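A hedged sketch using case() and like() (the host-name patterns are assumptions based on your examples; note the thresholds are compared as numbers, not quoted strings):

| eval severity_id=case(
    (like(host,"vmd%") OR like(host,"vmt%") OR like(host,"vmu%")) AND Private_MBytes > 20000, 4,
    like(host,"vmp%") AND Private_MBytes > 40000, 4,
    true(), 2)

Since 40000 > 20000, a vmd/vmu/vmt host above 40000 already matches the first branch, so no extra clause is needed for that case.
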
How do I change the directory path for the error below? The problem is with the /bin/bin in the path. Any help is greatly appreciated!