All Topics


Hi Splunkers, why does the following query give me the correct data:

| inputlookup append=t mitre_lookup
| foreach TA00* [ | lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0) ]
| eval codes_tech = "T1548, T1134,T1547"
| makemv delim=", " codes_tech
| eval TA0004 = if(mvfind(codes_tech, TA0004) > -1, TA0004." Es aqui", TA0004)

But when the data comes from a stats result, it doesn't find the values:

index=notable search_name="Endpoint - KTH*"
| fields technique_mitre
| stats count by technique_mitre
| eval tech_id=technique_mitre
| rex field=tech_id "^(?<codes_tech>[^\.]+)"
| stats count by codes_tech
| makemv delim=", " codes_tech
| mvexpand codes_tech
| stats count by codes_tech
| inputlookup append=t mitre_lookup
| foreach TA00* [ | lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0) ]
| eval codes_tech = codes_tech
| eval TA0004 = if(mvfind(codes_tech, TA0004) > -1, TA0004." Es aqui", TA0004)

I would really appreciate your support.
Hello, currently Splunk is installed on one of my AWS EC2 instances. It's a free 60-day trial version, for my personal use to test and do some research, and I have done a lot of tasks, including some research work, on that Splunk platform. Is there any way I can convert this trial version to a paid license version, so I can continue using this Splunk platform after 60 days? Thank you so much!
Hello, is there any Splunk-recommended maximum size for a single source file pushed by a universal forwarder (UF)? I know the maximum size of a lookup is 500 MB, but for UF-based data ingestion I have a few source files that need to be ingested every day, and each source file is around 2.2 GB. Do you have any recommendations? Thank you so much.
I have 2 lookup files, lookup1.csv and lookup2.csv.

lookup1.csv has the data below:
name, designation, server, ipaddress, dept
tim, ceo, hostname.com, 1.2.3.5, alldept
jim, vp, myhost.com, 1.0.3.5, marketing
pim, staff, nohost.com, 4.0.4.8, hr

lookup2.csv has the data below:
cidr, location
1.2.3.0/24, dc
1.0.3.0/24, carolina
3.4.7.0/24, tx

I would like to look up the field ipaddress from lookup1.csv against the field cidr in lookup2.csv, matching on the first three octets (x.x.x), and get the location field if they match. If the ipaddress doesn't match the first three octets of any cidr, the location should be marked as "unknown".

Expected output:
tim, ceo, 1.2.4.5, dc
jim, vp, 1.0.3.5, carolina
pim, staff, 4.0.4.8, unknown

I am looking for the search command in Splunk using the 2 lookup tables. Thanks in advance. My search so far has not yielded any good results, but I am still working on it.
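One possible sketch: treat lookup2 as a CIDR-matching lookup and fall back to "unknown" when nothing matches. The lookup definition name (lookup2) is an assumption; adjust to your actual lookup definition.

```
| inputlookup lookup1.csv
| lookup lookup2 cidr AS ipaddress OUTPUT location
| eval location=coalesce(location, "unknown")
| table name designation ipaddress location
```

For the CIDR comparison to work, the lookup definition for lookup2.csv needs match_type = CIDR(cidr) set in transforms.conf (or via Settings > Lookups > Lookup definitions > Advanced options); a plain exact-match lookup will never match an IP against a CIDR range.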
How does AppDynamics support public sector cloud migration to cloud services? In the Cisco AppDynamics Blog, Seth Price discusses how public sector agencies can leverage the real-time performance visibility of APM to gain the operational benefits of a multi-cloud strategy, all while ensuring their users aren't impacted before, during, and after migrations to cloud services. Seth gives the overview public sector agencies will need in order to prepare for an effective and pain-free migration, spelling out how AppDynamics APM can help. Like any major strategic initiative, cloud migration requires careful planning and close attention to complex details. Cisco AppDynamics can help remove anxiety and uncertainty from the process by providing the visibility and intelligence you need most. In doing so, you can migrate your mission-critical applications to the environment you choose, with confidence. The full post is about an 8-minute read.

Additional resources:
- Cisco AppDynamics cloud migration page
- Ultimate guide to hybrid cloud monitoring e-book
Where do I learn about business observability, why it's necessary, and how to use it in my org's digital transformation journey? New to Cisco AppDynamics, or want to understand how it can help your IT team deliver even more business value? Join our introductory webinar to explore how you can use the Cisco AppDynamics portfolio of solutions to achieve full-stack observability.

Webinar | An introduction to Cisco AppDynamics

You'll learn:
- What business observability is and why business context is now an IT imperative.
- How to accelerate digital transformation with business observability.
- Ways Cisco AppDynamics helps companies overcome modern observability challenges.

Session schedule:
AMER: August 23, 2023 at 11 a.m. PST / 2 p.m. EST
APAC: August 24, 2023 at 8:30 a.m. IST / 11 a.m. SGT / 2 p.m. AEST

Presenters:
Alisha Patil is a Sales Lead Development Representative at Cisco AppDynamics. She has over two years of experience working with top commercial and enterprise accounts that want to use Cisco AppDynamics. Alisha is passionate about educating potential customers to help their companies ensure an ideal customer experience and increase operational agility and effectiveness.
Aaron Schifman is a Senior Technical Product Marketing Manager at Cisco AppDynamics, with over two decades of experience thriving in challenging globally based environments as an engineer, technical product marketing manager, pre-sales consultant, and professional services leader. Having worked with Elastic and Dell EMC in highly technical customer-facing roles, Aaron brings a passion for articulating the role of AppDynamics in helping customers overcome their most pressing business challenges.

Register for "An introduction to Cisco AppDynamics" here
OCSF, Amazon Security Lake and Splunk (Watch Now)

Amazon recently announced the General Availability of Security Lake (ASL), a new data lake offering in AWS to store and query security data from both AWS and non-AWS data sources. Notably, data stored in ASL is required to be in Open Cybersecurity Schema Framework (OCSF) format. But what's behind the marketing messaging and this offering? Join this session from Security Field Solutions to get a technical overview of OCSF and Amazon Security Lake, how they integrate with Splunk today, and where things are heading.

Watch now to learn about:
- The new Amazon Security Lake offering in AWS
- The Open Cybersecurity Schema Framework (OCSF)
- Support for OCSF and Security Lake in Splunk
I'm trying to grab all items based on a field. The field is an "index" identifier from my data, but I only want the most recent one in my dashboard. Since eval doesn't have a max function across events, e.g.

eval max_value = max(index) | where index=max_value

doesn't work, is eventstats the only way to do this? It seems like a lot of overhead versus just getting a max value:

eventstats max(index) as max_value | where index=max_value

Is there another way to do this better?
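One alternative sketch computes the maximum once in a subsearch and feeds it back to the outer search as a filter, so only the matching events are retrieved at all. The index name (mydata) and the field name (batch_id, standing in for the "index" identifier field) are assumptions:

```
index=mydata [ search index=mydata | stats max(batch_id) as batch_id ]
```

The subsearch returns batch_id=<max value>, which the outer search applies as a search-time filter. Whether this beats eventstats depends on how selective that filter is and how expensive the base search is to run twice.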
From the sample logs below, we need rex expressions for:
1. "appl has successfully completed all authentication flows."
2. "Login complete"

2023-01-11 12:34:22,678 DITAU [taskstatus-8] {DIM: hgtijNMHy67v | DIS: aHJyikjI | DIC: HY56GFT6 | PI: 678.89.987 | ND: ks | APP: |OS: iOS-1-7.1 | REV" } APPDA - 87687654356789 - appl has successfully completed all authentication flows.
2023-01-11 12:34:22,678 DITAU [taskstatus-8] {DIM: hgtijNMHy67v | DIS: aHJyikjI | DIC: HY56GFT6 | PI: 678.89.987 | ND: ks | APP: |OS: iOS-1-7.1 | REV" } APPDA - 87687654356789 - appl has successfully completed all authentication flows.
2023-01-11 12:34:22,678 INFO [taskstatus-8] {DIM: hgtijNMHy67v | DIS: aHJyikjI | DIC: HY56GFT6 | PI: 678.89.987 | ND: ks | APP: | Req: POST /ai/v2/api/assert | OS: iOS-16.6 | VER: 23.4.0} AUDIT - Login complete
2023-01-11 12:34:22,678 INFO [taskstatus-8] {DIM: hgtijNMHy67v | DIS: aHJyikjI | DIC: HY56GFT6 | PI: 678.89.987 | ND: ks | APP: | Req: POST /ai/v2/api/assert | OS: iOS-16.6 | VER: 23.4.0} AUDIT - Login complete
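A sketch that captures either message into a single field, anchored on the "- " that precedes the message text in both event shapes (the field name message is an assumption):

```
| rex "\s-\s(?<message>appl has successfully completed all authentication flows\.|Login complete)$"
```

Against the samples above, message would be extracted as "appl has successfully completed all authentication flows." for the APPDA events and "Login complete" for the AUDIT events; split it into two separate rex commands if you need two different field names.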
Introducing Edge Processor

Need more visibility into data in motion? Want more control over data transformations? Look no further: Splunk Edge Processor is Splunk's latest innovation in data processing. Available in Splunk Cloud Platform, customers have new abilities to easily filter, mask, route, and transform data close to the data source. These data transformations are powered by the next generation of Splunk Search Processing Language (SPL2), which simplifies data preparation and search. With Splunk Edge Processor, customers can derive more value from, and gain more insight into, their data with less toil.

Key takeaways:
- Understand Splunk Edge Processor and its architecture
- Learn how Splunk Edge Processor fits in with the rest of your Splunk Cloud Platform feature set
- Identify key use cases for leveraging Splunk Edge Processor so you can reclaim license for higher-value needs

Watch this Tech Talk to learn how Splunk Edge Processor can support use cases in your environment.
Hello Splunkers, we recently upgraded our Splunk distributed deployment from 8.2.9 to 9.0.5.1. After the upgrade, our Splunk servers started showing under the unknown category, which is impacting pretty much all the dashboards in the Monitoring Console. Below is the error:

"Streamed search execute failed because: Error in 'prerest' command: You do not have a role with the rest_access_server_endpoints capability that is required to run the 'rest' command with the endpoint=/services/server/status/limits/search-concurrency?count=0. Contact your Splunk administrator to request that this capability be added to your role."

Please let me know what I can do to bring the views back to normal. Thanks in advance.
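The error message itself names the missing piece: the role running those Monitoring Console searches lacks the rest_access_server_endpoints capability, which Splunk 9.x requires for several /services/server endpoints. A sketch of granting it in authorize.conf (the role name role_monitoring is an assumption; apply it to whichever role your dashboard users actually hold, or add the capability via Settings > Roles in the UI):

```
[role_monitoring]
rest_access_server_endpoints = enabled
```

After editing authorize.conf directly, reload the authorization configuration (or restart Splunk) for the change to take effect.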
Good afternoon everyone,

I'm a fairly new Splunk user, so apologies for anything I miss while writing this up. For some reason our dashboard for the Q-Audit app, Qmulos, is no longer working. While processing auditing changes for the last seven days, the dashboard used to at least show the data that was already processed while loading the rest of the week. Now while searching it will only show 0 of however many events matched, until eventually resulting in no results found. I cannot even use the query to find the old data from weeks ago, when it did work successfully. The dashboard was created by another user who no longer works here. I tried cloning the dashboard myself to see if it was possibly a permissions issue, but that did not resolve it. The dashboard itself essentially audits users initializing applications: a graph of who initialized what application and how they did so. I cannot think of any possible changes we made that would cause this.

Dashboard query:

| tstats prestats=true summariesonly=false allow_old_summaries=false count as "count(Process)" FROM datamodel=Q_Application_State WHERE (nodename=Application_State.tag"=*) BY _time span=1s, host, "Application_State.process", "Application_State.src_user", "Application_State.user"
| stats dedup_splitvals=t count AS "count(Process)" by _time, host, Application_State.process, Application_State.src_user, Application_State.user
Did you know that Splunk Education offers more than 60 absolutely FREE training and education courses? Whether you're a brand spankin' newbie or a seasoned Splunk user looking to branch out or brush up on your knowledge, there's a little something for everyone. Check out the full roster of free courses over on Splunk Education! And want to stay in the know on even more training, certification, and education news? Make sure you subscribe to the indexEducation newsletter!
I have a HF that recently had its RAM capacity expanded. Ever since, there has been an issue with REST commands against a specific endpoint. For example:

| rest splunk_server=splunk-hf1 /services/server/info

works great and gives me the expected results, but:

| rest splunk_server_group=dmc_group_* /services/server/status/resource-usage/hostwide

acts like some of my servers, including the HF, have 0 memory when I run it.
I have a search that takes quite some time to run (using Python to run the search via the Splunk API). It returns saying the "role-based disk usage quota of search artifacts for user 'test' has been reached (used 192MB, quota=100MB)". How can I increase the disk quota?
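That quota comes from the srchDiskQuota setting on the user's role. A sketch of raising it in authorize.conf (the role name role_user is an assumption; edit whichever role the "test" user actually holds, either in the file or via Settings > Roles > Edit):

```
[role_user]
# Maximum disk space (in MB) that search artifacts for users in this role may use
srchDiskQuota = 500
```

If the user holds multiple roles, the highest srchDiskQuota among them applies, so check all of the user's roles.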
Apologies, but I am new to Splunk and am looking for a little bit of guidance/help. I am having an issue with one of our apps. For whatever reason, when I try going to Advanced Search > Macros from within the Network app, the URL changes to /data/macros and I get a 404 error. However, if I manually change it to /admin/macros, I can get in just fine and edit/view macros. It is only happening under one specific app; other apps are able to use /data/macros just fine. I think it might be permission related, but I am unsure where to check or what to look for. It is like this older thread, but I'm not sure where to start looking: https://community.splunk.com/t5/Knowledge-Management/Editing-macro-is-giving-a-404-error/m-p/433712/highlight/true
Hi, I just want to know, from AD account activity, who deleted a user account?
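If Windows Security event logs are being ingested, an AD user account deletion is recorded as Event ID 4726. A sketch (the index name wineventlog and the field names follow the Splunk Add-on for Microsoft Windows, so treat them as assumptions for your environment):

```
index=wineventlog EventCode=4726
| table _time, user, src_user, ComputerName
```

Here user is the account that was deleted and src_user is the account that performed the deletion; verify the field mapping against a sample 4726 event in your own data.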
I created a search to list servers and the last time a Windows log reported. The command I am using is:

| tstats latest(_time) as lastseen where index=windows by host
| convert ctime(lastseen)

I am trying to compare that "last seen" to the current time and alert if it is more than 24 hours old. Any thoughts on how to identify and alert on that? Ultimately I would love to add this to a dashboard.
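One sketch: keep lastseen numeric while filtering against a 24-hour threshold, format it only at the end, and save the search as an alert that triggers when any results are returned. Only index=windows comes from the post; the rest is a suggested shape:

```
| tstats latest(_time) as lastseen where index=windows by host
| where lastseen < relative_time(now(), "-24h")
| eval lastseen=strftime(lastseen, "%Y-%m-%d %H:%M:%S")
```

Hosts returned by this search have been silent for more than 24 hours; scheduling it as an alert with the trigger condition "number of results is greater than 0" covers the alerting part, and the same search can back a dashboard panel.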
I need to get the event count by source, showing the top 10 sources for each sourcetype, in Splunk. Example: I have 3 sourcetypes sending data from different sources:

sourcetype A - a,b,c,d,e
sourcetype B - a,b,c,d,e
sourcetype C - a,b,c,d,e

Now I need to display the top 10 event counts by source for each sourcetype.
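A sketch of one way to get the top 10 sources per sourcetype: count by the pair, sort descending within each sourcetype, and rank with streamstats (index=* is a placeholder; scope it to your real indexes and sourcetypes):

```
index=* | stats count by sourcetype, source
| sort 0 sourcetype, - count
| streamstats count as rank by sourcetype
| where rank <= 10
```

The sort 0 avoids the default 10,000-row sort limit, and streamstats assigns 1..N within each sourcetype group so the where clause keeps the ten highest counts per sourcetype.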
I have the below errors showing on the search head, and I've been looking for a cause with no luck:

Unable to initialize modular input "es_identity" defined in the app "SplunkEnterpriseSecuritySuite": Introspecting scheme=es_identity_export: script running failed (exited with code 1)

Post-install configuration gives: Error: Fetch failed:admin/ess_configured/ssl

I'm new to Splunk; thank you.