Hello, I am attempting to start a Splunk Docker container (search head) and add it as a search peer to an existing environment, all in one bash script, but am running into an issue. I am able to run each of the two steps separately without a problem, but hit an error when I combine them into one script.

I am able to build my Dockerfile and start the container successfully. I run the command below to start a container named splunk_sh:

    docker run -d --rm -it -p 8000:8000 --name splunk_sh dockersplunk:latest

After the container is up, I am also able to successfully add it as a search peer using the following command and script. (A copy of the search_peer.sh script is copied into my container via the Dockerfile.)

    # search peer command
    docker exec -it splunk_sh sh /opt/splunk/bin/search_peer.sh

search_peer.sh:

    #!/bin/bash
    sudo /opt/splunk/bin/splunk add search-server https://<indexer_ip>:8089 -auth <user>:<password> -remoteUsername <user> -remotePassword <password>

Running the two steps above separately lets me start my Splunk container and have it become a search peer. The problem starts when I run a script (docker_search_peer.sh) that includes both steps, starting the splunk_sh container and then running the search peer command.

docker_search_peer.sh:

    #!/bin/bash
    docker run -d --rm -it -p 8000:8000 --name splunk_sh dockersplunk:latest
    docker exec -it splunk_sh sh /opt/splunk/bin/search_peer.sh

When I run my docker_search_peer.sh script, the container starts but does not become a search peer. I get the error below:

    ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment

I've disabled SELinux (this was mentioned in a few different posts) but am still running into this issue. I'm not sure why I can run the commands/scripts separately but not together in one script. Any help or guidance would be much appreciated.
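One likely cause, offered as an assumption rather than a confirmed diagnosis: in the combined script, `docker exec` fires immediately after `docker run` returns, before Splunk inside the container has finished starting and exporting its environment, whereas running the commands by hand leaves a natural delay between them. A minimal sketch of a readiness wait (container and image names are from the post; the `splunk status` readiness check is an assumption about the image, so adjust it to whatever your entrypoint provides):

```shell
#!/bin/sh
# wait_for: retry a command up to N times, pausing between attempts.
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 5
  done
}

# Guarded so this sketch is safe to source anywhere; set RUN_DOCKER=1 to try it.
if [ "${RUN_DOCKER:-0}" = "1" ]; then
  docker run -d --rm -it -p 8000:8000 --name splunk_sh dockersplunk:latest
  # Poll until splunkd reports it is running, then add the search peer.
  wait_for 24 docker exec splunk_sh /opt/splunk/bin/splunk status
  docker exec splunk_sh sh /opt/splunk/bin/search_peer.sh
fi
```

The retry helper is generic, so the same pattern works with any readiness probe (for example, curling the management port on 8089).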
Local user accounts are migrating to the AppDynamics Identity Platform. Here’s what you need to know.

FOR NOTIFICATION UPDATES — Click the caret menu above right, then Subscribe.

Beginning with our SaaS Controller release in April 2023, AppDynamics-managed user accounts will begin migrating to the AppDynamics Identity Provider. With the AppDynamics Identity Provider, users will benefit from single sign-on access to Controller tenants and other AppDynamics resources, as well as best-in-class identity management. This article gives the reasons and benefits for this change, outlines the migration timing and process, and explains the continued user experience.

In this article…

What's happening and why?
• How will the Identity Provider Migration work?
• When is the Identity Provider Migration coming?

Frequently Asked Questions

Before the Identity Provider Migration
• Will this change disrupt user access to the system?
• What if I only use my local logins for service accounts?
• What if I have more than one user account on the SaaS Controller?
• Does this Identity Migration affect my SAML users?
• Does this Identity Migration affect my LDAP users?
• How many skips do users get before they must complete the process?

During the Identity Provider Migration
• What if I accidentally entered the wrong email address?
• What if I never receive the migration email?
• Why am I being asked to sign in again after signing in and initiating the migration using my email address?

Additional resources
• Related articles and posts

What’s happening and why?

Beginning with our SaaS Controller release in April 2023, local (AppDynamics-managed) user accounts will begin the process of migrating to the AppDynamics Identity Provider.
As we previously announced, AppDynamics-managed user accounts added from the 21.11 release onward are already part of our AppDynamics Identity Provider, which gives the benefit of single sign-on (SSO) across Controller tenants and access to all resources available at our AppDynamics website (such as University, Support, and Community). Further, these accounts are protected by a best-in-class identity provider, enabling proper security and a platform for continued improvement in account security options.

To ensure all older AppDynamics-managed user accounts gain the same benefits, they will need to be migrated to this identity system. Our aim in the migration is to avoid disruption and provide a simple process. The migration process is user-driven and triggered after a successful login. Once a user migrates successfully, all their future logins on that Controller account will use the new user account with a username equal to their email address. They will retain all previous access without interruption.

How will the Identity Provider migration work?

Upon successful login using a user account that has not yet been migrated, the user will be prompted to complete the migration process for the account. Users may choose to skip the process up to 3 times.

• Users will be asked to provide their email address.
• They will then be routed to the Controller account directly and be sent an email with migration instructions.
• Once they access the email and follow the prompts to set their new password, their account will complete migration. Subsequent logins will use their new AppDynamics identity.

What about users who already have an AppDynamics Identity Provider account?

In some cases, users already have accounts in the AppDynamics Identity Provider. This is true for users whose user email is listed in the Accounts Management Portal as created by the administrator.
Typically, this is because they have completed training, filed Support tickets, or participated in Community posts. For these users, the migration process will result in their local Controller user account being migrated to their existing AppDynamics identity.

Once a user's account is migrated on a specific Controller account, they will access that Controller account using their email-based username and the password established for that Controller account. Their access rights will remain unaffected: they will have exactly the same rights as before the migration. Their user account will show a new username in the Controller user administration experience.

When is the AppDynamics Identity Provider migration coming?

This migration will begin in April with the 23.4 SaaS Controller release. However, we will be rolling it out slowly over the subsequent weeks. So, if you don't see this as soon as your account is updated to 23.4, we just haven't enabled it for you yet.

If, as a Controller Account Owner, you wish to have migration enabled immediately, please feel free to create a Support request and we will enable it for your account.

Back to Top

Frequently Asked Questions

BEFORE THE IDENTITY PROVIDER MIGRATION

Will this change disrupt user access to the system?

We have taken many precautions in developing this experience to minimize impacts to users. We are only changing the user's authentication source within the system. The user's record will remain attached to all existing content and rights as is. Further, users will be given skips in case they don’t want to or can’t complete migration during that session. Ultimately, the Identity Migration will benefit users by giving them one account for everything AppDynamics and enabling a much more secure identity experience.

However, should your users experience issues, please reach out to Support and we will solve the problem with you.
Further, should you wish to test this and have a pre-production account, we can roll this out to your pre-production tenant first for confirmation. Please reach out to Support to make this request.

Back to Top

What if I only use my local logins for service accounts?

We consider the use of user accounts for integrations to be inherently insecure and recommend against it. Instead, please use the API Client capabilities for integrations. That said, the migration is for human users who use the login experience. Any code-based logins using local user credentials will never trigger the migration and will continue using local logins.

Back to Top

What if I have more than one user account on the SaaS Controller?

If you use more than one username on your SaaS Controller account, you will be required to provide an email address for each of those usernames. However, because the system expects usernames to be unique, once you migrate your first username, each subsequent username will need a different email address. We recommend that you start by first logging into the username you use the most, the one that best represents your typical usage. For this one, provide your email address and complete migration.

For subsequent usernames, try using the "+" approach for your email address. Some email systems, like Gmail and Exchange, allow you to append something to your base email address with a + sign. The emails will still be routed to your inbox so that you can follow the links there.

For example, let's assume you have 3 usernames on the Controller: user, financialsupport, and techsupport. Your email address is user@company.com. When you log in using the username user, you will provide user@company.com as your email address and complete the migration process. From then on, when you want to log in to the user account, you will log in using the username user@company.com. However, to log in using the techsupport username, you will need to provide an email address for migration.
If you use the user@company.com email address, you will receive a message telling you to choose a different email because this one is in use. Here, try using user+techsupport@company.com. And for the financialsupport username, you would use user+financialsupport@company.com. When you add the +<username> to your email in this way, you should still receive the necessary migration emails (with completion links) in your user@company.com inbox, allowing you to complete the process for these accounts.

Please note that these are new identities in our AppDynamics Identity Platform and can be used like any other user account. If you have multiple SaaS Controller accounts with similar identities, reuse these emails on the other Controllers to link them up and gain single sign-on for those accounts.

Back to Top

Does this Identity Migration affect my SAML users?

No, this only applies to users who are listed in the user management administration screen on the Controller under the "AppDynamics" drop-down. Any SAML users will continue to see the same experience they have today.

Back to Top

Does this Identity Migration affect my LDAP users?

No, this only applies to users who are listed in the user management administration screen on the Controller under the "AppDynamics" drop-down. Any LDAP users will continue to see the same experience they have today.

Back to Top

How many skips do users get before they must complete the process?

Users get 3 skips for starting the migration and then 3 more skips to complete the migration. This means there are 6 logins using the old identity before they are required to complete the migration to the new, secure identity.

Back to Top

DURING THE IDENTITY PROVIDER MIGRATION

What if I accidentally entered the wrong email address?

Once you provide the email address and are in the Controller, you will see a confirmation message that displays your email address as well as links to resend the mail or change your email address.
You can choose to change your email address and we will send the migration email to the new address.

Back to Top

What if I never receive the migration email?

First, check to confirm you entered the correct email address. Sign back into the Controller account using your old username and password. Once signed in, you will see a reminder that an email has been sent, and the address to which it was sent. You will have the options to:
• Resend the message
• Change your email address

If the email address is correct, please check your spam and junk folders. If you still don’t see the email message, select “resend the message” and try again. If you continue to have problems receiving the email message, reach out to Support.

Back to Top

Why am I being asked to sign in again after signing in and initiating the migration using my email address? I thought I was supposed to be directed into the Controller immediately.

This is because the email you entered is affiliated with an existing user account in our AppDynamics Identity Provider. We want to make sure that you are the owner of this account before migrating your Controller user account to the entered AppDynamics Identity Provider account. Once you successfully log in with the password associated with your email address (your AppDynamics Identity Provider user account), we will migrate your Controller account.

If the email you entered is correct and you don't recall your password, you can use the forgot-password flow to reset it. For more information, see "I'm stuck migrating my account because I can't log in or don't know what to do next."

Back to Top

Additional Resources

Announcement | Users who are part of our AppD IDP system will begin seeing an SSO experience
FAQ - Sign in once to AppD and see everything
I'm stuck migrating my account because I can't log in or don't know what to do next.
Hello, I am upgrading 800 Splunk universal forwarders on Red Hat Linux using this custom app. When I assign the custom app to the universal forwarders, Splunk shuts down and does not start again. With version 7.2.6 it works fine, with no issue. With anything after that version, the script does not work.

    03-21-2023 12:26:05.128 -0500 INFO PipelineComponent - Performing early shutdown tasks
    03-21-2023 12:26:05.128 -0500 INFO loader - Shutdown HTTPDispatchThread
    03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - Shutting down splunkd
    03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_Begin"
    03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_FileIntegrityChecker"
    03-21-2023 12:26:05.128 -0500 INFO ShutdownHandler - shutting down level "ShutdownLevel_JustBeforeKVStore
In today’s post, I'm excited to share some recent Splunk Mission Control innovations. With Splunk Mission Control, your SOC can detect, investigate, and respond to threats from one modern, unified work surface, bringing order to the chaos of your security operations. In Mission Control, you'll have access to Splunk's industry-leading security technologies and partner ecosystem in one place.

Solving Your Most Complex SOC Challenges

Using Mission Control, you can solve the most pressing security operations challenges for your team. First, it helps resolve the problem of detection, investigation, and response being spread across siloed tools while security insights are diffused across interfaces, making it difficult to achieve intelligent situational awareness. Second, it helps guide your teams through scattered SOC procedures and dispersed data across multiple systems. Finally, you can shift to a more proactive mode and preempt fatigue by getting past the never-ending detections and manual processes.

Taking Your SOC to the Next Level

Mission Control not only automates security operations, but also unifies detection, investigation, and response capabilities to empower your security operations team. With Mission Control, you can streamline your security workflows with response templates and modernize your security operations with automation. Here are some specific capabilities you receive with Mission Control:

Scalable security analytics
See a single queue of all your high-fidelity incidents consisting of your prioritized risk notables. Stop attacks fast with automated analysis of complex attack chains.

Standardized SOC processes
Speed up investigations with pre-built OOTB response templates that include embedded searches, actions, and playbooks to empower security analysts.

Orchestration, automation and response
Launch SOAR playbooks and actions without leaving the console. Plug and play with the integrations you need across your use cases.
Case management
Use response templates to add custom notes, intelligence data, and relevant files to document work within an investigation.

Metrics and reporting
Reference historical data from your response template tasks to deliver detailed SOC metrics, reporting, and auditability.

Integrated intelligence enrichment*
Fully investigate security events or suspicious activity by accessing the relevant, normalized intelligence to better understand threat context. (*regional limitations may apply)

Getting Started

Eligible users of Splunk Enterprise Security Cloud will be able to access Mission Control in just a few clicks once Mission Control is available on your stack. In the coming weeks, if you are an eligible user, you will be able to find Mission Control in the Splunk Apps dropdown as an available application. From there, it’s just a simple click to enable Mission Control and you’re off to the races!

If you want to learn more about Mission Control, please check out our updated web page and docs site. Stay tuned as we will also be posting more demo videos, blog posts, and content in Lantern to help you get started.
Search 1:

    | inputlookup test1.csv | table ITEM1 ITEM2

Search 2:

    | inputlookup test2.csv | table ITEM1 ITEM3

Conclusion: I want it to show

    | table ITEM1 ITEM2 ITEM3

but my results are showing the rows from the two searches appended rather than joined:

    ITEM1  ITEM2
    ITEM1  ITEM2
    ITEM1         ITEM3
    ITEM1         ITEM3

Question: How can I join on the ITEM1 values so that I get a result of ITEM1 ITEM2 ITEM3?
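One common SPL pattern for this, sketched as a suggestion rather than a verified answer (lookup and field names are taken from the post): append the second lookup to the first, then collapse the rows that share an ITEM1 value with stats:

```spl
| inputlookup test1.csv
| append [| inputlookup test2.csv ]
| stats values(ITEM2) as ITEM2 values(ITEM3) as ITEM3 by ITEM1
| table ITEM1 ITEM2 ITEM3
```

This avoids the join command entirely; each ITEM1 ends up on one row with whichever ITEM2/ITEM3 values either lookup contributed. If an ITEM1 can map to several ITEM2 or ITEM3 values, values() will return them as a multivalue field.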
I am attempting to audit the usage of commands such as chown or chmod in my Linux environment. Through the query below I am able to see the list of users, hosts, and the commands that were run, but not the files or directories that they were run on. There are no fields in the event viewer that show file paths or directories of any kind.

    index=myindex comm="chmod" | table date , host , AUID , comm , exe , source

Any assistance would be appreciated. Pretty new to Splunk.
Hi All, We have recently installed Enterprise Security, but strangely the default dashboard doesn't display the indexes we have in our environment. Initially I thought the indexes were not CIM compliant, but that wasn't the case, as many of them are. Unfortunately, I am running out of ideas and need some help configuring it. Need someone who can help me with it. Thanks much,
Hello, I am attempting to replace a large, unwieldy macro with a data model. Part of the macro is a rex command that finds what we call "confusable characters", the high-bit versions of ASCII characters like 𝟐 or ꓜ, and replaces them with the ASCII versions (2 or Z respectively), like this:

    rex field=$arg1$ mode=sed "y/𝟐𝟚𝟤𝟮𝟸ꝚƧϨꙄᒿꛯ/22222222222/"

The actual macro is much longer and encompasses all numbers and letters. I have been having difficulty figuring out how to incorporate this into the data model. I've been able to use a CSV lookup like this:

    char_search,old_char,new_char
    *𝟐*,𝟐,2
    *ꓜ*,ꓜ,Z

make char_search a wildcard match field, and use this query:

    | makeresults | eval t="dfasdf𝟐𝟐" | lookup CSVconfusables char_search as t OUTPUT | eval u=replace(t,old_char,new_char)

It works fine with 1 character to replace, but when there are multiple characters to replace, the lookup output fields become multivalue and replace doesn't work:

    | makeresults | eval t="ꓜdfasdf𝟐𝟐" | lookup CSVconfusables char_search as t OUTPUT | eval u=replace(t,old_char,new_char)

Is there any way to accomplish what the macro is doing in a data model? Thanks in advance!
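One possible direction for the multivalue case, offered as an untested sketch: on Splunk 9.0+ the foreach command supports mode=multivalue, which can iterate over the zipped old/new character pairs and apply replace cumulatively. The lookup and field names below come from the post; the exact quoting of <<ITEM>> inside the subsearch should be verified against your Splunk version:

```spl
| makeresults
| eval t="ꓜdfasdf𝟐𝟐"
| lookup CSVconfusables char_search as t OUTPUT old_char new_char
| eval pairs=mvzip(old_char, new_char)
| foreach mode=multivalue pairs
    [ eval t=replace(t, mvindex(split(<<ITEM>>, ","), 0), mvindex(split(<<ITEM>>, ","), 1)) ]
```

Each iteration rewrites t in place, so all matched confusables are replaced even when the lookup returns several old_char/new_char pairs.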
Hi All, I want the chart's x-axis to show both date and time, as in the screenshot of the desired chart (image omitted). The chart I am able to create does not show this (image omitted). I tried eval strftime on _time but am not getting the desired result.

The 1st query I tried:

    index=unix (source=cpu sourcetype=cpu) OR (sourcetype=vmstat) host IN (usaws135000) | fields _time cpu_load_percent memUsedPct swapUsedPct host | timechart span=1h eval(round(avg(cpu_load_percent),2)) as CPUAvg eval(round(avg(memUsedPct),2)) as MemoryAvg eval(round(avg(swapUsedPct),2)) as SwapAvg by host useother=t limit=0

The 2nd query I tried:

    index=unix (source=cpu sourcetype=cpu) OR (sourcetype=vmstat) host IN (usaws1350) | fields _time cpu_load_percent memUsedPct swapUsedPct host | bin span=1h _time | eval _time=strftime(_time,"%a %b %d %Y %H:%M:%S") | stats avg(cpu_load_percent) as CPUAvg avg(memUsedPct) as MemoryAvg avg(swapUsedPct) as SwapAvg by _time
Hello everyone, I have a question for you. I have a table (image omitted), but I want the event Dépôt on the first line and the event Pré-contrôle on the second line. I don't know how to do this. Can you help me, please?
A question about the architecture of the home lab. Hello, I am a Splunk Enterprise Certified Admin who has an opportunity to advance to Splunk Architect with someone retiring. I plan on taking the Splunk Architect courses but would like to set up a home lab to give myself practice and experience. To best prepare, I’d like to set up a virtual home lab with a Splunk distributed search environment, an indexer cluster, and a deployment server to deploy all the apps to the forwarders. How many total Ubuntu Server VMs in Hyper-V should I spin up? I think one search head, at least two indexers (right?), the deployment server, a management node, and possibly an HF for practice. So maybe a total of six VMs? Or is that too few, or too many? It depends on how many Splunk roles each VM can play, which I’m not entirely sure about, and it isn’t easy to find this information online. I’m not planning on ingesting much data, just a few data sources for practice. This is more of a proof of concept and learning opportunity for me from an architectural perspective.
I'm attempting to auto-assign users to certain types of Notable events under "Default Owner". For some reason only 20/24 users are showing up as options. The users that are missing from the drop down have accounts with the same role as the other users and they have logged into Enterprise Security before.
Hi All, I'm looking to combine the two searches below. I have been messing around with it, but I don't do this a lot! Rather than post my ramblings, I'll ask the basic question: I want the 'state', 'startTime', and 'completeTime' from the second search to be added to the first search.

    # search 1
    index=vmware-taskevent sourcetype=vmware_inframon:events fullFormattedMessage="Task:*" | stats by info.entityName fullFormattedMessage info.entity.type info.queueTime userName vm.name computeResource.name createdTime info.task.moid | sort createdTime | table info.entityName fullFormattedMessage info.entity.type info.queueTime userName vm.name computeResource.name createdTime info.task.moid

    # search 2
    index=vmware-taskevent sourcetype="vmware_inframon:tasks" | stats by entityName name queueTime startTime completeTime entity.type state reason.userName task.moid | table entityName name queueTime startTime completeTime entity.type state reason.userName task.moid

There are common results in the fields but not common field names. I.e., sourcetype=vmware_inframon:events has 'info.task.moid' and sourcetype="vmware_inframon:tasks" has 'task.moid', and the results from these fields match. The same applies to info.entityName and entityName.
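One common way to line these up, sketched here as a suggestion and untested against real data (all index, sourcetype, and field names come from the post): search both sourcetypes at once, normalize the differently named key fields with coalesce, then merge the events with stats by the shared keys.

```spl
index=vmware-taskevent (sourcetype=vmware_inframon:events fullFormattedMessage="Task:*") OR sourcetype="vmware_inframon:tasks"
| eval moid=coalesce('info.task.moid', 'task.moid')
| eval entity=coalesce('info.entityName', entityName)
| stats values(fullFormattedMessage) as fullFormattedMessage values(userName) as userName
        values(createdTime) as createdTime values(state) as state
        values(startTime) as startTime values(completeTime) as completeTime
        by moid entity
```

The single quotes around the dotted field names are needed in eval so SPL treats them as field references; add or drop values() clauses to carry whichever other fields you need.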
(Running v9.0.2208 of Splunk Cloud) When I load a dashboard with external URLs in it, external content warnings are thrown up. How do I get rid of these? In the version we're running, I cannot update 'Settings > Server settings > Dashboards Trusted Domains List', as I believe that is only available in v9.0.2209. I'm also unable to enable automatic UI updates, which is the fix in the current version. I've tried to create a web-features.conf but am not having any luck. Thanks!
For SOAR v5.3.5 there is a pre-req that /tmp has a minimum of 5 GB free. Does anyone know if the soar-install script can be passed an option not to check disk space? There used to be an option in earlier versions to pass it no-space-check.
Hi All, We have two different on-prem environments, one lower and one higher. While promoting ITSI changes from the lower environment to the higher environment using the ITSI full backup and restore method, I am facing the issues below.

1. I am unable to restore the newly/custom created ITSI Import objects (which are stored under the itsi/local app as part of creating entities from a saved search and setting up a recurring import). Per the documentation, if the saved search is stored in itsi/local it is excluded from backup and restore, so what process should I follow to promote it to the higher environment?

2. As part of this backup and restore, by default all the entities are promoted to the higher environment. Is there any way to restrict promoting the entities alone, because the entities change based on the environment?

Thanks in advance
So currently I have a line chart with a marker for each data point, and here is the code for that:

    <option name="charting.chart.showMarkers">true</option>
    <option name="charting.chart.markerSize">3</option>

But when I try to increase the marker size, it does not work. If anyone has a solution, it would be greatly appreciated. Thanks.
Hello, I have been trying to present Splunk dashboards through iframes on my website, and it's not working, as you can see in the picture below. I have seen other people with the same question, but no suggested solution worked for me. If I try to present reports, they work fine, but the dashboards don't seem to work. This is my code:

    <iframe src="https://MY_SPLUNK_SERVER/en-US/app/MY_APP_NAME/DASHBOARD_NAME?embed=true" width="100%" height="550px" frameborder="0" scrolling="no"></iframe>

Clearing the cookies and restarting the browsers didn't work, and neither did trying a different browser. Does anyone know how to solve this, or whether it's even possible?
Hello everyone, I have events which contain fields such as user1=..., user2=..., user3=..., etc. And I have a lookup which has a column "user" where all the users are located.
I have a string like below and am unable to extract it accurately with the rex command; please suggest an alternative way.

_raw:

    {lable:harish,message: Say something, location:India, state:TS,qual:xyz}
    {message: say nothing,lable:harish, location:India, state:TS,qual:xyz}
    {lable:harish, location:India, state:TS,qual:xyz,message: say splunk splunk answers}

The position of the message value is randomized, and I need to pick out the message value. When I try the command below, it does not stop at the proper end position. Please suggest.

    |rex "message:(?<Message_value>.*)[,|}]" |table Message_Value
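The greedy `.*` in the pattern above runs to the last `,` or `}` in the event, which would explain the wrong end position. A hedged sketch of a fix, assuming (as in the sample data) that the message value itself never contains a comma or closing brace: use a negated character class so the match stops at the first delimiter.

```spl
| rex "message:\s*(?<Message_value>[^,}]+)"
| table Message_value
```

Note the field name created by rex is Message_value, so the table clause must use the same capitalization. If message values can legitimately contain commas, a character-class approach won't work and a stricter pattern (or structured extraction at index time) would be needed.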