All Topics

Which do you use or side with? Which do you think is best for functionality and bandwidth usage? Thank you for your time and consideration.
I keep getting an error message in our messages section at the top, stating that search head cluster member ____ is having problems pulling configurations from the search head cluster captain. Changes from the other members are not replicating to this member, and changes on this member are not replicating to other members. Consider performing a destructive configuration resync on this search head cluster member.

I've performed a destructive resync several times, and when I run the show kvstore-status command from any search head, including the supposedly affected one, it claims there are no actual issues with the search head; it just says it is a non-captain KV store member like the rest. If I try to manually close the error message, it just comes back. What else can I look into here?
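For reference, a minimal set of CLI checks on the affected member (assuming the default management port and that you run these as the splunk user; credentials are placeholders):

# On the affected member: check SHC and KV store health
splunk show shcluster-status -auth admin:changeme
splunk show kvstore-status

# Documented destructive resync of replicated configs from the captain
# (wipes local replicated changes on this member)
splunk resync shcluster-replicated-config

If the banner keeps returning even though status looks clean, the ConfReplication component in splunkd.log (index=_internal sourcetype=splunkd component=ConfReplication on the member) is usually where the underlying pull error shows up.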
Hello, I'm looking to reference a specific artifact from the Phantom Playbook Visual Editor. For example, a Phantom: Update Artifact block takes two parameters: artifact_id and cef_json. The list of default datapaths for artifact_id all follow the format artifact:*.<field>, where the wildcard causes the update to occur on ALL artifacts. I would instead like to reference the first artifact in the container, so that only the first artifact is updated. Is there a way to construct the datapath to accomplish this?

The current workaround I have is to use a Custom Function to output the first artifact object of the container, but this only creates a snapshot of the artifact object at the time the function is called; if I update the artifact after calling the function, I need to call the function again to get the updated artifact object values. The closest thing I've seen to this is the phantom.collect() API call, in which you can specify a datapath with a specific label (i.e. phantom.collect(container, "artifact:uniqueLabel")), where only the artifacts with the given label are returned, but this same syntax does not work in the Playbook Visual Editor.
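As a sketch of the custom-code route (phantom.collect2 is the documented playbook API; the wiring into the Update Artifact block is illustrative, not an official datapath syntax):

import phantom.rules as phantom

# Inside a playbook custom-code block: fetch artifact ids in container order
# and keep only the first one. collect2() re-reads the container each time it
# is called, so running it again after an update returns fresh values.
rows = phantom.collect2(container=container, datapath=["artifact:*.id"])
if rows:
    first_artifact_id = rows[0][0]
    # Feed first_artifact_id into the Update Artifact block's artifact_id
    # parameter (e.g. via a custom function output or a format block).

Unlike the snapshot workaround, re-running the collect2 call right before the update keeps you on the current artifact values.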
I'm trying to extract a field that looks like "Alert-source-key":"[\"abcdd-gdfc-mb40-a801-e40fd9db481e\"]". I have tried "Alert-source-key":"(?P<Alert_key>[^"]+)" but I'm getting results like "[\" since the character class only matches up to the next double quote, which appears immediately after the escaped bracket.
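One way to handle the escaped quotes is to skip past the [\" prefix and exclude both the backslash and the quote from the capture. A sketch (the exact escaping depends on how the backslashes actually appear in your raw events, so adjust as needed):

| rex field=_raw "\"Alert-source-key\":\"\[\\\\\"(?P<Alert_key>[^\\\\\"]+)"

At the regex level this is "Alert-source-key":"\[\\"(?P<Alert_key>[^\\"]+) — match the literal [\" and then capture everything that is neither a backslash nor a quote, which yields just the UUID.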
Hi, I need help converting 210910085155 (yymmddhhmmss) to a date.

index=mydata
| eval fields=split(EventMsg,",")
| eval file_string=mvindex(fields,0)
| eval CreatedDate=mvindex(fields,17)
| table CreatedDate

CreatedDate = 210910085155 and I need to convert it to a date.
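A standard approach is strptime/strftime, since the value is already in yymmddhhmmss order (a sketch; pick whatever output format you prefer):

| eval created_epoch=strptime(CreatedDate, "%y%m%d%H%M%S")
| eval CreatedDate=strftime(created_epoch, "%Y-%m-%d %H:%M:%S")

strptime parses the string into epoch seconds, and strftime renders the epoch back out in a readable format.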
We have more than 80 accounts in AWS so far, and we spin up new accounts every so often. We're using the Splunk Add-on for AWS (#1876). Configuring each account manually is a chore. Is there any way to configure new accounts automatically, without having to use the UI?
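One avenue worth testing, as a sketch only: add-ons built on the UCC framework generally expose their account configuration over splunkd REST, so something along these lines may work. The endpoint path and parameter names below are assumptions — confirm them against the add-on's restmap.conf and README before relying on this:

# Hypothetical endpoint name; verify against the add-on's restmap.conf
curl -k -u admin:changeme \
  https://<search-head>:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws/settings/account \
  -d name=my-new-account \
  -d key_id=AKIA... \
  -d secret_key=...

Because the secret key is stored encrypted by the add-on, REST is generally safer than trying to push account stanzas around as flat conf files.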
Hello @jkat54, @richgalloway

I am new to the add-on and am not able to figure out how to make API calls with it. I am attempting to use the OpenWeatherMap API below { OpenWeatherMap API - Free Weather Data (for Developers) (rapidapi.com) }

| curl method=GET uri=https://community-open-weather-map.p.rapidapi.com/weather user=<mysplunkusername> pass=<mysplunkpassword> headerfield= { 'x-rapidapi-host': "community-open-weather-map.p.rapidapi.com", 'x-rapidapi-key': "API_key_from_rapid_API" } data={"q":"London","lat":"0","lon":"0","callback":"test","id":"2172797","lang":"null","units":"imperial","mode":"json"}

Instead of getting the data, I am getting the output below (attached screenshot). Can you please tell me what I am doing wrong?
I am getting started with using the deployment server (DS) to deploy new configurations to UFs. I need to view the list of server classes, see what they contain and do, and learn how to add or edit server classes. Thanks in advance.
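Besides the Forwarder Management UI (Settings > Forwarder management), the deployment server exposes the list over REST, and the definitions live in serverclass.conf. A quick sketch (hostname and credentials are placeholders):

# List server classes via the deployment server's REST API
curl -k -u admin:changeme \
  "https://<deployment-server>:8089/services/deployment/server/serverclasses?output_mode=json"

# The definitions themselves live in:
#   $SPLUNK_HOME/etc/system/local/serverclass.conf
# After editing there (or in the UI), reload the deployment server:
splunk reload deploy-server

Each serverclass.conf stanza names a server class, its client whitelist/blacklist, and the apps it deploys, so reading that file is the fastest way to see what each class does.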
We are using the Splunk Add-on for AWS (Splunk Cloud), and we have succeeded in configuring the Description input to fetch descriptions of the usual kinds of AWS resources: EC2 instances, S3, ... Yet we are missing some important resources that we need, like FSx for Lustre file systems. The add-on seems to have a hard-coded list of resources. How can we add FSx? Is there a way to extend it on our own? We can write the fetch calls if needed.
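If you do end up writing the fetch calls yourselves, here is a boto3 sketch of the collection side, which could feed a scripted or modular input of your own (it assumes AWS credentials are already available to the process; region, index, and sourcetype wiring are up to you):

import json
import boto3

# Describe all FSx file systems in a region and emit one JSON event each.
# describe_file_systems() is paginated, so follow NextToken.
client = boto3.client("fsx", region_name="us-east-1")
token = None
while True:
    kwargs = {"NextToken": token} if token else {}
    resp = client.describe_file_systems(**kwargs)
    for fs in resp.get("FileSystems", []):
        print(json.dumps(fs, default=str))  # stdout -> scripted input event
    token = resp.get("NextToken")
    if not token:
        break

Note that on Splunk Cloud you cannot install arbitrary scripted inputs on the search tier, so this would typically run on an IDM/inputs forwarder you control.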
Hello, we have around 1200 systems that have UFs on them. They are a mixture of Windows and Linux devices. I'm curious whether it's possible to use a platform like Tanium or SCCM to push the UF .MSI down to those systems and initiate the upgrade that way. Your input is GREATLY appreciated. Thanks.
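Yes, that is a common pattern: any tool that can run an installer silently works. On Windows the UF MSI supports silent, in-place upgrades via documented MSI properties (the version and deployment server below are placeholders):

msiexec.exe /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes DEPLOYMENT_SERVER="deploy.example.com:8089" /quiet /norestart

For the Linux UFs you would push the .rpm/.deb (or tarball) instead, e.g. rpm -U splunkforwarder-<version>.rpm followed by splunk start --accept-license --answer-yes --no-prompt. Tanium and SCCM can both execute these commands per-platform.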
Hello guys, does someone know whether it is possible to match search results with previous results of the same search? I have a machine that can enter different modes. Just for the example, let's say the machine can enter mode A, B, or C. I receive a heartbeat every few seconds from hundreds of these machines, which leads to a very large dataset. But I am not interested in the heartbeat itself; I am interested in the transitions between modes. Example:

Time       Machine_ID  Mode
10:00:00   1           A
10:00:01   2           C
10:00:02   2           C
10:00:03   1           B
10:00:04   2           B

So what I am basically interested in here is the transition of machine 1 from mode A to B and of machine 2 from C to B. In other words: I am searching for heartbeats where the mode is different from the mode of the previous heartbeat of the same Machine_ID. At the end, my result would look something like this:

_time      _time_old_Mode  machine_ID  new_mode  old_Mode
10:00:03   10:00:00        1           B         A
10:00:04   10:00:02        2           B         C

I have tried subsearches, but I was not successful. The simplified search for getting the heartbeats is currently:

index="heartbeat"
| rex field=_raw "......(?P<MODE>.......)"
| fields _time ID MODE

Performance is not crucial, as the plan is to run this at night for a summary index. Thanks in advance! Best regards
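streamstats is a good fit here, since it can carry the previous event's values forward per machine. A sketch on top of your base search (field names follow your example; sort 0 puts all rows in ascending time order so "last" means "previous heartbeat"):

index="heartbeat"
| rex field=_raw "......(?P<MODE>.......)"
| sort 0 _time
| streamstats current=f last(MODE) as old_Mode last(_time) as _time_old_Mode by ID
| where MODE!=old_Mode
| rename MODE as new_mode
| table _time _time_old_Mode ID new_mode old_Mode

The first heartbeat of each machine has no old_Mode, so the where clause drops it, which is what you want (no transition yet). _time_old_Mode comes back as epoch seconds; add | fieldformat _time_old_Mode=strftime(_time_old_Mode, "%H:%M:%S") if you want it rendered as a timestamp.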
I notice some include .csv files. Do these .csv files need updating, or do they go stale? How are the datasets updated? Please advise. Thank you very much.
Hey all, I recently migrated data for some indexes from a distributed Splunk instance to a clustered Splunk instance using the bucket migration approach. After the index data migration, I can see the same data and event counts for each index in both the old and new instances for a specific time range set manually from the time range filter. But since the older instance is in the EDT time zone and the new instance is in UTC, when I compare the dashboards that use those indexes for validation purposes, I see a count mismatch because of the time zone difference.

I tried changing the preferences to the same time zone in both instances, but it's not working. Can anyone please help and let me know how this issue can be resolved, so that the dashboards can be validated without setting the time range manually every time?
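One workaround for the validation itself: compare counts using absolute epoch timestamps in the search, since epoch time is timezone-independent and gives both instances an identical window regardless of user preferences. A sketch (index name and epoch values are placeholders):

index=my_migrated_index earliest=1631232000 latest=1631318400
| stats count by index

Running the same epoch-bounded query on both instances should return matching counts if the buckets migrated cleanly; any remaining mismatch is then real, not a time zone artifact.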
Hi! I recently deleted a user. I should not have done that... Can I restore it? If anyone has any ideas, I'd appreciate it greatly. Thanks in advance.
View our Tech Talk: DevOps Edition, Maximizing the Performance of Your Kubernetes Deployment

Identifying issues in a microservices environment deployed with Kubernetes can become more challenging than with typical monolithic deployments. As requests traverse different layers of the stack and multiple services, modern monitoring tools must monitor these interrelated layers while efficiently correlating application and infrastructure behavior to streamline troubleshooting. Overall, to get the most out of Kubernetes, understanding your deployment is key. This webinar will discuss how to maximize the performance of your Kubernetes deployment with Splunk Infrastructure Monitoring.

Tune in to learn about:
- How to set up Splunk Infrastructure Monitoring to monitor your Kubernetes deployment
- The key metrics to look for when monitoring Kubernetes
- How the Splunk OpenTelemetry collector gathers metrics
- How to identify missing resource limits within CustomResourceDefinition files, and why they are important to have
View our Tech Talk: Security Edition, Intelligence Management with Splunk + TruSTAR

Manual vetting and data from multiple sources cause analysts to waste much of their time data wrangling, taking time away from the alerts that matter the most. Analysts need the ability to normalize and enrich multiple data sources for an objective view of security events. The TruSTAR Unified App for Splunk Enterprise and Enterprise Security helps security professionals analyze notable events and leverage intelligence to quickly understand threat context and prioritize and accelerate triage.

Tune in to learn how to:
- Customize data ingest preferences using TruSTAR Indicator Prioritization Intel Workflows
- Automatically download observables into Splunk KV stores
- Enrich and prioritize notable events in Splunk Enterprise Security
View our Tech Talk: Platform Edition, Simplify Ticket Remediation with Machine Learning

Machine learning (ML) can be applied to help companies leverage intelligence in their operations. In this webinar, we will discuss how the Splunk Machine Learning Toolkit (MLTK) can be extended to create domain-specific guided Assistants that can simplify workflows for users such as IT administrators.

Tune in to learn:
- How ML can provide new insights into ticket management
- A deep-dive demo of an app powered by the Machine Learning Toolkit (MLTK) that can help admins mine their data for patterns and easily identify candidates for automated remediation
- How this use case can complement IT Service Intelligence (ITSI), our premium monitoring and analytics solution powered by artificial intelligence for IT operations (AIOps)
Hi guys! I recently started using Splunk Cloud with the Jenkins Add-on, which works great. I have linked multiple Jenkins instances; many of them were testing instances with dummy names. Now that the tests are done, I'd like to remove all the dummy names from the 'Jenkins Master' drop-down list in the Jenkins add-on section of Splunk. I tried deleting all Jenkins-related indexes and the HTTP Event Collector, and uninstalled the Jenkins add-on itself. After reinstalling it, I can still see all of those dummy Jenkins master names. I'd be grateful if you could help me remove them. Thanks!
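If the add-on keeps its master list in the KV store rather than in an index (an assumption worth checking), the stale entries would survive both index deletion and reinstallation, since KV store collections persist independently. You can look for candidate collections with the generic KV store endpoints, a sketch:

| rest /servicesNS/-/-/storage/collections/config
| search eai:acl.app=*jenkins*
| table title eai:acl.app

Entries in a collection you find this way can then be removed with a DELETE against /servicesNS/nobody/<app>/storage/collections/data/<collection>/<key>; on Splunk Cloud you may need support's help for the REST side.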
Hello, I'm very new to the SPL world, so forgive me if this is basic. I am looking to create a visualization that charts out:
1) Errors reported for each day of the week in the last 7 days
2) A baseline of average errors per day of the week over the last 60 days

So far I have, as an example:

index=main sourcetype="access_combined_wcookie" status>=500
| chart count as "Server Errors" by date_wday
| eventstats avg("Server Errors")

This gives me the overall running average, but not a per-weekday result like:

Day of the Week   Number of Errors   60 Day Average
Monday            14                 12.38
Tuesday           10                 13.69
etc.

...and I'd like to be able to chart this. Any help, and an explanation of the how, would be much appreciated. Thank you in advance.
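One way to get both numbers in a single table, as a sketch (it assumes a 60-day search window, set here via earliest; field names are illustrative):

index=main sourcetype="access_combined_wcookie" status>=500 earliest=-60d@d
| bin _time span=1d
| stats count as daily_errors by _time date_wday
| eventstats avg(daily_errors) as avg_60d by date_wday
| where _time >= relative_time(now(), "-7d@d")
| stats sum(daily_errors) as errors_7d avg(avg_60d) as avg_60d by date_wday
| eval avg_60d=round(avg_60d,2)
| rename date_wday as "Day of the Week", errors_7d as "Number of Errors", avg_60d as "60 Day Average"

The idea: count errors per calendar day, compute the per-weekday average across all 60 days with eventstats, then keep only the last 7 days for the "current" column. One caveat: days with zero errors produce no row, so they drop out of the average; if that matters, build the daily counts with timechart and fillnull first.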
Hello! Is there any information on whether virtual BOTS at .conf '21 will be organized for time zones other than AMER? I am mostly interested in an EMEA one. I could find only one date and time, and that is 1 AM here, which is not a convenient time to play CTFs. Regards, Damian