All Topics

I have used Splunk to threat hunt many times and have aspirations to build a distributed Splunk instance in the future. I decided to start learning the installation, configuration, and deployment process of Splunk by building a standalone instance. I got to the point where I thought I had completed all the steps necessary for a functioning Splunk setup (connections are established on 8089 and 9997) and my web page loads fine. But as soon as my apps are pushed to my (client), Splunk starts throwing an error stating that indexers and queues are full, and it also appears I am getting no logs from my applications. Any help is greatly appreciated.
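A minimal first diagnostic, assuming the default _internal index is searchable on this instance: splunkd's metrics events record which queues are blocked, which usually identifies the pipeline stage (parsing, indexing, or output) that is backing up.

index=_internal sourcetype=splunkd blocked=true group=queue
| stats count by host, name
| sort - count

If indexing queues dominate on the indexer, check disk space and index configuration there; if output queues dominate on the forwarder, check the 9997 connection.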
I am using the query below to merge two queries using append. However, I am unable to get the value of the field named "Code" from the first query (under | search "Some Logger") printed in the Statistics section:

index=* sourcetype=* host=* | search "Some Logger"
| rex "LoggerName\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
| table Code
| append
    [ search host=* | search "LoggerName2*"
    | rex field=_raw "field1=(?<field1>)\}"
    | rex field=_raw "field2=(?<field2>),"
    | rex field=_raw "field3=(?<field3>[a-zA-z-_0-9\\s]*)"
    | rex field=_raw "(?<field4>[\w-]+)$"
    | rex field=_raw "field5=(?<field5>),"
    | rex field=_raw "field6=(?<field6>),"
    | table field1,field2 ]

The result from the 2nd/child query, i.e. | search "LoggerName2*", is printing just fine in tabular format. The value of the Code field is an API response code, i.e. it can be 2XX, 3XX, 4XX, or 5XX. Could someone please help? Thanks!
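A hedged observation on the query above: capture groups like (?<field1>) are empty, so they extract nothing, and there is no final table after the append, so columns from the two legs may not both display. A minimal sketch of the shape that usually works, with the empty groups given bodies (the patterns here are assumptions about the raw data):

index=* sourcetype=* "Some Logger"
| rex "LoggerName\|(?<time>\w+)\|(?<Service>\w+)\|(?<Type>\w+)\|(?<brand>\w+)\|(?<template>\w+)\|(?<hashId>[\w-]+)\|(?<Code>\w+)"
| table Code
| append
    [ search host=* "LoggerName2*"
    | rex field=_raw "field1=(?<field1>[^,}]+)"
    | rex field=_raw "field2=(?<field2>[^,]+)"
    | table field1, field2 ]
| table Code, field1, field2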
I have raw data like:

Error=REQUEST ERROR | request is not valid.|","time":"1707622073040"

and I want to extract "REQUEST ERROR | request is not valid." into a new field, so I tried rex to match up to |" with the query below, but it still only returns "REQUEST ERROR":

|rex field=_raw "Error\=(?<ErrDesc>[^|\"]+)"
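The character class [^|\"]+ stops at the first pipe, which is why the capture ends after "REQUEST ERROR". A non-greedy match up to the literal |" sequence should capture the whole description; a minimal sketch against the sample event above:

| rex field=_raw "Error=(?<ErrDesc>.+?)\|\""

The .+? matches lazily, so it only runs up to the first place where a pipe is immediately followed by a double quote.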
I am trying to script the installation of the Mac Splunk Universal Forwarder package. The package is a disk image (.dmg). I understand that we can mount the image using hdiutil and access the volume to find the .pkg file. The issue is that when we attempt to run installer on the .pkg, the end user is prompted to answer dialog boxes, which we do not want to occur. Is there a switch to install the extracted .pkg (or the .dmg) silently on a macOS machine?
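A minimal sketch of a silent install, assuming the image and .pkg names below (they vary by UF version, so treat them as placeholders); installer -pkg ... -target / runs without GUI dialogs when invoked with root privileges:

# mount the disk image without opening a Finder window
hdiutil attach splunkforwarder.dmg -nobrowse -mountpoint /tmp/splunkuf
# run the package installer non-interactively against the boot volume
sudo installer -pkg "/tmp/splunkuf/splunkforwarder.pkg" -target /
# unmount when done
hdiutil detach /tmp/splunkuf
# the first start can also be made prompt-free
sudo /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt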
I see a lot of deprecation errors in the _internal index. How can these errors be resolved?
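A small sketch for locating the source of such messages, assuming they come from splunkd; the component column usually names the deprecated feature or setting, which is what needs changing in the configuration:

index=_internal sourcetype=splunkd log_level=WARN deprecated
| stats count by component
| sort - count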
Is it possible to use something like this: GitHub - okfse/sweden-geojson: Tiny GeoJSON files of Sweden's municipalities and regions, or this: GitHub - perliedman/svenska-landskap: Sveriges landskap som öppen geodata i GeoJSON (Sweden's provinces as open geodata in GeoJSON), with Splunk? If so, are there any manuals/instructions/blog posts etc. you could point me to describing how to achieve this? Best regards
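A sketch of how this is typically wired up, with the caveat that Splunk's geospatial lookups are defined from KMZ/KML files, so the GeoJSON would first need converting (e.g. with GDAL's ogr2ogr); the lookup name geo_sweden and the field municipality below are assumptions:

index=web
| stats count by municipality
| geom geo_sweden featureIdField=municipality

Rendered with the Choropleth Map visualization, each municipality polygon is then shaded by its count.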
I have a number of devices that send logs to Splunk, and I want to know when devices stop logging. For this example search:

index="mydevices" logdesc="Something that speeds the search" | top limit=40 devicename

How can I find "devicename"s that have logged in the last week but haven't logged in the last 30 minutes, if that makes sense? Iain.
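A minimal sketch of the usual pattern: search the whole week, take each device's most recent event time, and keep only those whose last event is older than 30 minutes:

index="mydevices" logdesc="Something that speeds the search" earliest=-7d
| stats max(_time) as lastSeen by devicename
| where lastSeen < relative_time(now(), "-30m")
| eval lastSeenTime=strftime(lastSeen, "%F %T")
| table devicename, lastSeenTime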
"I need to create a dashboard with two queries in one dashboard, one query having a fixed time range of "Today" and the other query needs to select "earliest and latest" from the drop down. The data ... See more...
"I need to create a dashboard with two queries in one dashboard, one query having a fixed time range of "Today" and the other query needs to select "earliest and latest" from the drop down. The data dropdown will have two values "Yesterday" and "last week". Last week is the day from last week (if today is Feb 13, last week should show data from Feb Feb 06)" for.eg  index="abc" sourcetype="Prod_logs" | stats count(transactionId) AS TotalRequest (***earliest and latest needs to be derived as per user selection from drop down) | appendcols [search index="abc" sourcetype="Prod_logs" earliest=@d  latest=now (****Today's data****) | stats count(transactionId) AS TotalRequest]      
Hi All,

I am trying to pass time variables to the search when I click on a value in a drilldown dashboard. Below is the source of the dashboard:

<form version="1.1">
  <label>test12</label>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>test12</title>
      <table>
        <search>
          <query>index=_internal status=* sourcetype=splunkd |lookup test12 name AS status OUTPUT value | stats count by value</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">row</option>
        <option name="refresh.display">progressbar</option>
        <drilldown target="_blank">
          <set token="drilldown_srch">index=_internal status=* sourcetype=splunkd |lookup test12.csv name as status output value | where value=$row.value$</set>
          <link>search?q=$drilldown_srch|u$</link>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

I tried adding the time variables in the link as below, but no luck:

<link>search?q=$drilldown_srch?earliest=$field1.earliest&latest=$field1.latest$|u$</link>

Thanks
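A hedged sketch of the link form that usually works: earliest and latest go in as separate query-string parameters after the URL-encoded search token, joined with &amp; (which must stay XML-escaped inside the <link> element):

<link>search?q=$drilldown_srch|u$&amp;earliest=$field1.earliest$&amp;latest=$field1.latest$</link>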
Hello, this app was working fine for me until I updated to Splunk Enterprise 9.1.2, whereupon the urllib library keeps throwing errors saying it does not understand HTTPS. From some rudimentary googling, it appears this may be related to the Splunk Python urllib library not being compiled to use SSL. Would it be possible to refactor this app to use the HTTP request helper functions?

bash-4.2$ /opt/splunk/bin/python3 getSplunkAppsV1.py
Traceback (most recent call last):
  File "getSplunkAppsV1.py", line 92, in <module>
    main()
  File "getSplunkAppsV1.py", line 87, in main
    for app_json in iterate_apps(app_func):
  File "getSplunkAppsV1.py", line 76, in iterate_apps
    data = get_apps(limit, offset, app_filter)
  File "getSplunkAppsV1.py", line 35, in get_apps
    data = json.load(urllib.request.urlopen(url))
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 525, in open
    response = self._open(req, data)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 548, in _open
    'unknown_open', req)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/opt/splunk/lib/python3.7/urllib/request.py", line 1420, in unknown_open
    raise URLError('unknown url type: %s' % type)
urllib.error.URLError: <urlopen error unknown url type: https>

(The same error is produced when I use Python version 2)
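A minimal sketch of a workaround, assuming the requests package shipped in Splunk's bundled Python is available (it typically is, but worth verifying on 9.1.2); requests carries its own HTTPS handling rather than relying on an SSL-enabled urllib build:

import requests  # bundled with Splunk's Python 3; availability is an assumption

def get_apps(url):
    # fetch and decode the JSON response, raising on HTTP errors
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()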
Hi, I created a column chart in Splunk that shows the month, but I would also like to indicate the day of the week for each of those months. Sample query:

index=_internal | bucket _time span=1d | eval month=strftime(_time,"%b") | eval day=strftime(_time,"%a") | stats avg(count) as Count max(count) as maximum by month, day
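A hedged sketch of one way to get both onto the x-axis: produce a daily count first (the sample query above aggregates a count field that does not exist yet), then fold month and weekday into a single label so the column chart can split on it:

index=_internal
| bin _time span=1d
| stats count by _time
| eval label=strftime(_time, "%b %a")
| stats avg(count) as Count, max(count) as Maximum by label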
Query:

index=abc mal_code=xyz TERM(application) OR (TERM(status) TERM(success)) NOT (TERM(unauthorized) TERM(time) TERM(mostly)) site=SOC
|stats count by Srock
|stats sum(count) as Success
|appendcols
    [search index=abc mal_code=xyz (TERM(unauthorized) TERM(time) TERM(mostly)) NOT (TERM(status) TERM(success)) site=SOC
    |stats count by ID
    |fields ID
    |eval matchfield=ID
    |join matchfield
        [search index=abc mal_code=xyz site=SOC "application"
        |stats count by Srock
        |fields Srock
        |eval matchfield=Srock]
    |stats count(matchfield) as Failed]
|eval Total=Success+Failed
|eval SuccessRate=round(Success/Total*100,2)
|table *

From the above query I am getting data only for one site, but I want data for both sites, SOC and BDC. I tried site=* but it's not working. Any help would be appreciated.
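A hedged, simplified sketch of the usual restructuring: keep site in every stats clause so both legs aggregate per site, then stitch them back together by site (this sketch drops the matchfield join for brevity, so treat it as a shape to adapt rather than a drop-in replacement):

index=abc mal_code=xyz site IN (SOC, BDC) TERM(application) OR (TERM(status) TERM(success)) NOT (TERM(unauthorized) TERM(time) TERM(mostly))
| stats count as Success by site
| append
    [ search index=abc mal_code=xyz site IN (SOC, BDC) (TERM(unauthorized) TERM(time) TERM(mostly)) NOT (TERM(status) TERM(success))
    | stats count as Failed by site ]
| stats sum(Success) as Success, sum(Failed) as Failed by site
| eval Total=Success+Failed
| eval SuccessRate=round(Success/Total*100,2)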
I wasn't sure if having multiple license managers would cause any violations. Ideally, we do not like the idea of having a single point of failure for our license manager and are looking to implement redundancy. Is this possible, or will it cause issues?
Hi there! How are you doing?

Our FIM tool is detecting modifications to the /etc/passwd file by the splunkfwd user on some of our critical Linux servers that have the Splunk Universal Forwarder installed. Do you know if this behavior is correct? Shouldn't it be modifying /opt/splunkforwarder/etc/passwd?

Thank you very much! Regards, Juanma

PS: when echoing $SPLUNK_HOME it appears to be blank for other users, but the tool is sending logs correctly to SplunkCloud
Hi Team, I need to decrease the number of indexers in use by half. In my current configuration, the site replication factor is 5 in total with origin:3, and the site search factor is 3 in total with origin:2. My total number of indexers is 24 and I want to decrease the count to 12. I would like the complete process for reducing the indexer cluster size so that the buckets carrying site information are not impacted.
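A hedged outline of the mechanics, with the numbers below as illustrative examples only (the factors must be satisfiable by the indexers remaining at each site). First, if 12 peers cannot meet the current factors, lower them on the cluster manager in server.conf:

[clustering]
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

Then decommission peers one at a time, letting the manager re-replicate each peer's buckets before removing the next; on each indexer being retired:

splunk offline --enforce-counts

Waiting for the cluster to return to a complete state between removals is what keeps site-affine bucket copies intact.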
How to extract the alphanumeric and numeric values from a line, where both are dynamic values:

<Alphanumeric>_ETC_RFG: play this message: announcement/<numeric>
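A minimal rex sketch, assuming the literal text around the placeholders is fixed; the field names alnum_val and num_val are made up for the example:

| rex "(?<alnum_val>\w+)_ETC_RFG: play this message: announcement/(?<num_val>\d+)"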
I created an alert from the search below, and it emails a PDF. Is there a way to fetch the most recent event from each of the hosts in this search and include it in the email?

| metadata type=hosts | where recentTime < now() - 10800 | eval lastSeen = strftime(recentTime, "%F %T") | fields + host lastSeen
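A hedged alternative sketch that carries the raw event along, at the cost of scanning events instead of metadata (so restrict index=* to the relevant indexes and time range in practice):

index=* earliest=-7d
| stats latest(_raw) as lastEvent, latest(_time) as recentTime by host
| where recentTime < now() - 10800
| eval lastSeen=strftime(recentTime, "%F %T")
| table host, lastSeen, lastEvent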
Empowering businesses with enhanced monitoring and robust observability solutions

At Cisco, visibility and control are paramount in today’s dynamic digital landscape. That's why we're thrilled to announce a significant milestone in our journey to empower businesses with robust observability solutions. On January 31, 2024, we launched dashboards for Cisco Cloud Observability, powered by the Cisco Observability platform. This release marks the beginning of a new era in monitoring and insights.

A promising start

This inaugural release of dashboards on Cisco Cloud Observability is just the start of the plans we have in store. We believe in an agile development approach, where we continuously evolve and enhance our offerings based on your needs and feedback. While this initial release is already packed with five out-of-the-box AWS dashboards, rest assured that more are on the horizon.

Your feedback, our inspiration

At Cisco AppDynamics, we've always valued our customers' opinions. Your feedback is the driving force behind our innovations. The decision to kickstart our dashboards journey with out-of-the-box solutions is a testament to our commitment to addressing your needs. We've heard your voices loud and clear from AppDynamics Dashboarding, and this release is a direct response to your requests and suggestions.

Enhanced filtering capabilities

One of the key features introduced in this first release is an improved filtering experience. We understand the importance of fine-tuning your monitoring to gain actionable insights quickly. With our support for filters, you can now easily narrow your focus by tags and attributes right at the top of your dashboards. This enhancement will empower you to identify and address issues with greater precision and efficiency.

AWS dashboards

We have designed user-friendly dashboards that provide a detailed overview of and insight into key AWS services such as EC2, EBS, EFS, ELB, and S3. Powered by Infrastructure Collector and CloudWatch, these dashboards are instrumental in monitoring cloud services. The metrics displayed on these dashboards are neatly categorized into sections such as memory, CPU, and network. With the help of top-level filters, you can swiftly navigate through the data and focus on instances that are experiencing issues. These dashboards allow you to view the overall health of your cloud services and effortlessly isolate any problems. In addition, we're in the process of developing more out-of-the-box dashboards for AWS, GCP, and Azure, which we plan to release soon.

A sneak peek into the future

While our initial release is designed to provide you with immediate value, we have big plans for the future. We will be releasing new out-of-the-box dashboards for APM, Cost Insight, Troubleshooting, and more, in an agile manner. We're excited to announce that the edit experience for our dashboards is set to launch in the early summer of 2024. This addition will enable you to customize and tailor your dashboards to your specific needs, ensuring a more personalized and efficient monitoring experience. For example, we’ll have dashboards that give you an overview of the health of all your entity types, including but not limited to BTs and services, so you can share the full-stack view with your CIOs as well.

Your journey with Cisco Cloud Observability

We're not just building software; we're forging a partnership with you on your observability journey.
With our dashboards, we aim to simplify complex data into meaningful insights that drive better decisions and faster actions. Your success is our success, and we're here to support you every step of the way.

How to get started

Getting started with dashboards on Cisco Cloud Observability is easy. Just log in to your Cisco account and explore the new Dashboards tab. You'll find a curated selection of out-of-the-box dashboards, ready to provide valuable insights into your AWS environments. Learn more in the documentation.

Keep the conversation going

We want to continue hearing from you. Your feedback has been instrumental in shaping this release, and we are eager to learn how dashboards for Cisco Cloud Observability are enhancing your monitoring experience. Reach out to us through the Feedback button in Cisco Cloud Observability, through support channels, or here in the Community forums, and let us know your thoughts, suggestions, and any challenges you're facing. Your input will drive our future enhancements and ensure that our solutions remain tailored to your needs. As well, if you’re open to participating in our user-research sessions, please register here.

Conclusion

The launch of dashboards for Cisco Cloud Observability is a significant milestone for us at Cisco, and we couldn't be more excited to embark on this journey with you. Our commitment to providing valuable, customer-driven solutions remains unwavering. As we move forward, our agile approach will ensure that we adapt and evolve our offerings to meet your ever-changing requirements. Thank you for your trust and partnership. Together, we'll usher in a new era of observability and insights, empowering you to thrive in the digital age. Stay tuned for more updates, and let's continue shaping the future of monitoring and observability, one dashboard at a time.

About me

Deena Shanghavi is a Director of Product Management responsible for Cisco Observability Platform Dashboards and Unified Observability Experience. Deena is passionate about creating simple user experiences and providing out-of-the-box experiences so that users have low or zero configuration. When not working, Deena loves to read, listen to audiobooks, paint, meditate, and do Yin Yoga.
Hello, I would like a search that shows the last entry for host="1.1.1.1", including the full event. Thank you
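A minimal sketch, assuming the host lives in an index you can read (narrow index=* to the real index for speed); events come back newest first, so head 1 keeps the most recent one:

index=* host="1.1.1.1"
| head 1
| table _time, _raw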
Hello, I'm trying to get a solid answer on what Splunk's licensing terms are regarding using the Splunk Enterprise free license (0.50 GB/day) on a production system in a for-profit company. Is this allowed, or are we required to buy the 1 GB minimum license? The Splunk Enterprise download site, https://www.splunk.com/en_us/download/splunk-enterprise.html, clearly states that "After 60 days you can convert to a perpetual free license...", so if my ingestion is below the 500 MB/day limit but the license is on a production system, is this permitted, or would I have to buy a 1 GB license? Note, I haven't actually deployed Splunk Enterprise on a production system; I'm gathering all the facts before I make the move to production. Thanks.