All Topics


I have alerts configured to expire after 100 days and scheduled to execute their search query every 10 minutes. I can see that the alert search jobs are available under "| rest /services/search/jobs" and are using disk space. I could not find anything about this in the logs. Could someone help me understand the relationship between disk quota utilization and the triggered alert retention period?
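In case it helps while digging, here is a minimal sketch for inspecting the retained alert artifacts from the endpoint you already referenced; the label filter is a placeholder for your alert's name, and ttl/diskUsage are the fields I would expect to show the artifact lifetime and its disk footprint:

| rest /services/search/jobs
| search isSavedSearch=1 label="<your alert name>"
| table label, sid, ttl, diskUsage

As far as I understand it, the alert expiration you configured becomes the TTL of those saved-search artifacts, so each one keeps counting against the owning user's search disk quota until it expires.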
Hi Everyone, I have created a multivalued field from some other fields, called combi_fields. I am showing those multivalued fields with | stats values(*) as * by identity. Now I have a table with Identity and combi_fields. In combi_fields I want to check whether a piece of data is the same across all the multivalued entries for a given Identity. For example:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1- folder2
abcdefg - 441 - 456 - Passed - folder1- folder2
abcdefg - 113 - 110 - Passed - folder1- folder2

In the above example the first piece of data is the same in every entry. If it is the same, I have to consider the greatest number and give its status as the output, like: ABC abcdefg Passed. There might be different data in the first position, like below:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1- folder2
abcdefg - 441 - 456 - Passed - folder1- folder2
xyzabc- 113 - 110 - Passed - folder1- folder2
xyzabc- 201 - 219- Passed - folder1- folder2

Here it should show:
ABC abcdefg Passed
ABC xyzabc Passed

How can I do this? How can I compare values within a field?
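A minimal sketch of one approach, assuming combi_fields is delimited by " - " exactly as shown and that the status you want comes from the entry with the greatest second number; the field names prefix, num, and status are made up for illustration:

| stats values(combi_fields) as combi_fields by identity
| mvexpand combi_fields
| rex field=combi_fields "^(?<prefix>[^-\s]+)\s*-\s*(?<num>\d+)\s*-\s*\d+\s*-\s*(?<status>\w+)"
| eval num=tonumber(num)
| eventstats max(num) as max_num by identity, prefix
| where num=max_num
| table identity, prefix, status

The eventstats/where pair keeps only the entry with the highest number per identity and prefix, so an identity whose first value differs across entries simply produces one row per distinct prefix.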
Hi all, I'm trying to get all the saved searches in Splunk across all apps. Could someone explain to me what the endpoint servicesNS/-/-/saved/searches is and what data it returns? For reference, I've tried to use that endpoint and match it with saved searches only (reports), not returning any alerts. But the data returned has a lot more than expected, as the number in the "Reports" tab under "All apps" is a lot smaller than the number returned from the REST call. Any help or link to docs would be appreciated.
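For what it's worth, a sketch of the filter I would try first; it assumes the common convention that alerts have a non-"always" alert_type or alert.track enabled, and that the Reports tab only counts visible, enabled objects, so the numbers may still differ slightly:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| where alert_type="always" AND 'alert.track'="0"
| search disabled=0 is_visible=1
| table title, eai:acl.app, eai:acl.owner, is_scheduled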
Why is this add-on not supported anymore? Is there any other alternative for OT/ICS data?
Hello Team, I followed the steps mentioned on the page below for migration to Splunk Enterprise version 9.2.1: Upgrade to version 9.2 on UNIX - Splunk Documentation. I receive the error below when running the start command. Due to this error, I am unable to complete the migration on the Splunk indexer machine.

Warning: cannot create "/data/splunk/index_data"
Creating: /data/splunk/index_data
ERROR while running renew-certs migration.
Hello there, I want to render a Splunk app's dashboard on my website securely. Is there any way to do this? I have successfully accessed an existing dashboard's XML definition by following this guideline: data/UI/views/{name}. Thanks for your support.
Hi All, I have a query which returns results for a particular month, like how many tickets breached SLA. The month and year are hardcoded in the query. Now I do not want to hardcode the month in the query; instead, I want the user to be able to select the month and get the results. Could you please help here?

Query results:

TicketCountSLABreached(TCSB)  TotalTicketCount(TTC)  IncResolutionTime(TCSB/TTC*100)  TimeStamp
2                             3                      66.667                           February 2024
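A minimal sketch of the shape this could take, assuming the search runs from a dashboard with a time picker whose token is time_tok, and that the index and field names (index=tickets, sla_breached) are placeholders for whatever your real query uses:

index=tickets earliest=$time_tok.earliest$ latest=$time_tok.latest$ sla_breached=*
| eval TimeStamp=strftime(_time, "%B %Y")
| stats count(eval(sla_breached="true")) as TicketCountSLABreached, count as TotalTicketCount by TimeStamp
| eval IncResolutionTime=round(TicketCountSLABreached/TotalTicketCount*100, 3)

The point is just that deriving the month from _time with strftime, and letting the time picker control earliest/latest, removes the need to hardcode the month in the query.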
Hello Splunkers! In Security Posture, by default there are no filters that would allow us to adjust the time, meaning we see the summary of notable events over the last 24 hours. I want to change that. I have added a time picker that I would like to bind to one dashboard in Security Posture, "Key Indicators", so that I can see, for example, the summary of notable events over the last 12 hours or 7 days. Can someone please explain what needs to be done on the time picker or the dashboard in order to achieve this, or maybe there is an easier way to do this? Thanks for taking the time to read and reply to my post.
| inputlookup E.csv | search 4Let="ABCD" | stats count as count3 [search index=xyz category="Ad" "properties.OnboardingStatus"= Onboarded | dedup properties.DeviceName | rename properties.DeviceName as DeviceName | stats count as count2]

This search is giving an error.
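One way this is often restructured, sketched here under the assumption that you want both counts side by side: a subsearch placed directly after stats gets expanded into the stats command's arguments and usually produces a syntax error, whereas appendcols runs the second search separately and attaches its column to the first result:

| inputlookup E.csv
| search 4Let="ABCD"
| stats count as count3
| appendcols
    [ search index=xyz category="Ad" "properties.OnboardingStatus"=Onboarded
      | dedup properties.DeviceName
      | stats count as count2 ]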
I would like to have an investigation created with a notable event recorded in it, using the API. I've been trying to achieve this by adding a notable event to an ES investigation using the API. So far I have been able to create an investigation and then add an artifact to it using the API. The next step I need to complete is to insert a notable event into an ES investigation using the API. Alternatively, if it's possible to create an investigation from a notable using the API, then I would also be happy with that option.
Hi, for the migration of data we need to use SmartStore from Splunk. Please help us understand the below pointers:
Is SmartStore available for on-prem implementations?
Costing
How do you size the solution?
In a Python script I get the below error in the internal logs:

TypeError: Object of type bytes is not JSON serializable

We are using Python 3. May I know how to get rid of this error in the internal logs?
Hi Splunkers, on Linux when I try to wget the Linux download, it says download.splunk.com is not trusted. Could you please check it? Thanks. Best Regards, Sekar
Configuring Log Observer, I am getting the error "Unable to create Splunk Enterprise Cloud client. Invalid or incorrect splunkenterprisecloud certificate" while following these instructions: https://app.us1.signalfx.com/#/logs/connections/enterpriseCloud/new
I have been asked to create a dashboard for our threat hunters and would like some ideas. They want to know what they could breach off of our web servers. So far I have a table with just the hosts we have. I also have a table with HTTP response counts.
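If it helps for brainstorming panels, a minimal sketch of one more table, assuming a standard web access sourcetype (index=web and sourcetype=access_combined are placeholders): it surfaces hosts with a high server-error rate and unusually many distinct paths, which tends to interest hunters looking at exposed web servers:

index=web sourcetype=access_combined
| stats count as requests, count(eval(status>=500)) as server_errors, dc(uri_path) as distinct_paths by host
| eval error_pct=round(server_errors/requests*100, 2)
| sort - error_pct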
Cisco AppDynamics modernizes self-hosted observability with a new virtual appliance that unlocks new AI-powered intelligence for anomaly detection and root cause analysis, application security, SAP solution monitoring, and AppDynamics Flex licensing. IT operations teams can now detect anomalies faster and with greater accuracy, protect against security vulnerabilities and attacks, and maintain the performance of SAP applications and business processes.

You deploy this self-hosted solution as an OVA virtual appliance on vSphere – other platform support, such as VHD, AMI, and KVM, is coming right around the corner! This means:

Simplifies deployments – everything you need is packaged neatly together in one downloadable OVA, streamlining deployments.
Reduces compatibility challenges – the different service versions, i.e. Enterprise Console, Controller, EUM, and Events Service, are maintained during installation and upgrades, reducing concerns about compatibility and the strain of maintenance.
Offers consistency across different computing environments – leveraging existing infrastructure ensures operations teams are already familiar with deployment and maintenance scenarios.

This means streamlined deployments with less complexity!

Many large corporations are not ready to, or are unable to, adopt cloud-native solutions. But when it comes to supporting those who use self-hosted solutions, including those with private clouds, namely those in the financial industry looking to protect sensitive information, healthcare organizations concerned about sensitive electronic medical records, or those in the public sector with data residency requirements, there is still a need for modern tools, such as those that leverage AI-powered intelligence and focus on security. Modern services are needed regardless of deployment type!

What are these new exciting modern services?

Cognition Engine, which uses machine learning and AI-powered intelligence to establish dynamic baselines and understand what "normal" application performance looks like, so alerts are more precise, with anomaly detection driving faster root cause analysis.

(Image: Speeding root cause analysis with issue suggestions)

Cisco Secure Application, which reduces the risk of security exposure without compromising delivery speed for application performance monitoring (APM)-managed applications by detecting security threats and providing guidance on remediation based on business priorities and risk exposure.

(Image: Identify security threats for business transactions)

What other features are important to note with this release?

SAP solution monitoring better enables IT operations teams to build resiliency into SAP landscapes that span SAP and non-SAP environments, in key areas such as service availability, process dashboards, ABAP code-level visibility, and SAP-related security. The AI-powered intelligence improves analysis of the Java stack (SAP Portal, SAP MI).

Cisco AppDynamics Flex Licensing is also new, enabling on-premises customers to seamlessly shift licenses for use with commercial SaaS features for broader full-stack observability capabilities, while also providing faster access to APM features through automated controller updates. Flex Licensing is meant to offer flexibility to those looking to transition some of their services to SaaS while reducing the burden of maintaining components and without incurring additional costs.

What if I already have an existing on-premises deployment?
We considered those who need a new deployment and those who are already using on-premises services. The good news is that the deployment is the same. For those using an existing on-premises controller, the existing agents that are sending data just need their target controller changed to the new one. The new virtual appliance deployment will communicate with the existing deployment, thus providing additional modern services while retaining existing APM correlation. Pretty slick!

Regardless of whether it is net new or integrating with an existing on-prem deployment, the prerequisites are:

Three virtual machines, ranging from small (16 vCPU / 64 GB RAM / 500 GB disk) to medium (32 vCPU / 128 GB RAM / 3 TB disk)
Controller version 24.2.2 (for existing controller compatibility)
VMware vSphere Client 7.x or higher (other hosts are on the roadmap and will be coming soon – check the documentation for the latest)

The high-level steps for deployment are:

Download the OVA from the AppDynamics download portal.
Deploy the three VMs using the downloaded OVA in vSphere Client (you will need to input host names, host IPs, DNS and gateway addresses, and an optional domain name).
Create a Kubernetes (MicroK8s) cluster from those three VMs.
Install the infrastructure and application services (a simple appdcli start command), supporting HA (high availability) mode.
Verify service health with another simple appdcli command.

The detailed steps are found in our documentation: https://docs.appdynamics.com/appd/onprem/24.x/24.4/en/cisco-appdynamics-self-hosted-virtual-appliance

We are bringing faster time to value by offering a better onboarding experience with modern tools and security services. Please read the following blog or attend the May 29th webinar for more details, and reach out to your sales team for more information.

Additional Resources
Blog: Cisco AppDynamics modernizes Self-Hosted Observability for hybrid application monitoring
Upcoming webinar: Cisco Unlocks AI-Powered Intelligence for Self-Hosted Observability
AppDynamics Documentation: https://docs.appdynamics.com/appd/onprem/24.x/24.4/en/cisco-appdynamics-self-hosted-virtual-appliance
The universal forwarder does not parse data except in certain limited situations. Can anyone tell me what these situations are?
Hi Splunkers, I'm deploying a new Splunk Enterprise environment; inside it, I have (for now) 2 HFs and a DS. I'm trying to set an outputs.conf file on both HFs via the DS; the clients phone home to the DS correctly, but then the apps are not downloaded. I checked the internal logs and got no errors related to the apps. I followed the docs and the course material used during the Architect course for reference. Below is the configuration I made on the DS.

App name:
/opt/splunk/etc/deployment-apps/hf_seu_outputs/

App files:

/opt/splunk/etc/deployment-apps/hf_seu_outputs/default/app.conf
[ui]
is_visible = 0
[package]
id = hf_outputs
check_for_updates = 0

/opt/splunk/etc/deployment-apps/hf_seu_outputs/local/outputs.conf
[indexAndForward]
index=false
[tcpout]
defaultGroup = default-autolb-group
forwardedindex.filter.disable = true
indexAndForward = false
[tcpout:default-autolb-group]
server=<idx1_ip_address>:9997, <idx2_ip_address>:9997, <idx3_ip_address>:9997

serverclass.conf:
[serverClass:spoke_hf:app:hf_seu_outputs]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled
[serverClass:spoke_hf]
whitelist.0 = <HF1_ip_address>, <HF1_ip_address>

File and folder permissions are right; the owner is the user used to execute Splunk (in a nutshell, the owner of /opt/splunk). I suppose it is a very stupid issue, but I'm not able to figure it out.
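In case it helps rule things out, a small sketch for checking what the HF's deployment client is actually logging; the host value is a placeholder and the component names are only my best guess at the usual deployment-client components, so adjust as needed:

index=_internal sourcetype=splunkd host=<HF1_hostname> (component=DC* OR component=DeploymentClient OR component=ApplicationManager OR log_level=ERROR)
| table _time, host, component, log_level, _raw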
Hi, I would like to participate in your dashboard challenge, but I get the error:

You do not have sufficient privileges for this resource or its parent to perform this action. Click your browser's Back button to continue.

Can you help me?
Hi All, below is a query to get the sum of field values for the latest correlationId. I need to show it in a pie chart, but I am getting the values as "other". PFA screenshot.

index="mulesoft" *Upcoming Executions* content.scheduleDetails.lastRunTime="*" [search index="mulesoft" *Upcoming Executions* environment=DEV | stats latest(correlationId) as correlationId | table correlationId|format]|rename content.scheduleDetails.lastRunTime as LastRunTimeCount | stats count(eval(LastRunTimeCount!="NA")) as LastRunTime_Count count(eval(LastRunTimeCount=="NA")) as NA_Count by correlationId| stats sum(LastRunTime_Count) as LastRunTime_Count,sum(NA_Count) as NA_Count
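A minimal sketch of one way to reshape that single-row result into the two-column category/value form a pie chart expects, assuming the rest of the search stays exactly as in the question; transpose flips the two summed columns into rows:

... | stats sum(LastRunTime_Count) as LastRunTime_Count, sum(NA_Count) as NA_Count
| transpose column_name=status
| rename "row 1" as count

The pie chart can then use status as the category and count as the value, instead of trying to slice two separate numeric columns.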