All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


For example:

| tstats count from datamodel=test where * by test.url, test.user
| rename test.* AS *
| search NOT
    [ | inputlookup list_of_domains
      | rename domain AS url
      | fields url ]
| table url, user

I want this to show me the URLs from the DM that do NOT appear in the lookup, and then give me the corresponding usernames from the DM. But this is not working properly: when I run this search, I still see some of the URLs that are in the lookup. Please help!
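A hedged sketch of one thing to check first, using the field names from the post: the NOT subsearch produces exact-match terms, so any case difference between the lookup and the data model values will defeat it. Normalizing both sides with lower() rules that out:

| tstats count from datamodel=test where * by test.url, test.user
| rename test.* AS *
| eval url=lower(url)
| search NOT
    [ | inputlookup list_of_domains
      | eval url=lower(domain)
      | fields url ]
| table url, user

If the lookup holds bare domains while the data model holds full URLs, exact matching will never exclude them; in that case a lookup definition with match_type = WILDCARD(domain) (and wildcards in the CSV values) is the usual workaround. Also note that subsearches are capped at 10,000 results by default, so a large lookup may be silently truncated.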
I have this query and I want to add another data series/line to this chart. How can I do it?

index="eniq_voice"
| where localDn="ManagedElement=TO5CSCF01"
| bucket _time span=15m
| stats max(CPULoad_Total) as CPULoad_Total by localDn _time
| timechart max(CPULoad_Total) as CPULoad_Total

I want to add this to the query with a line chart:

| stats max(CPULoad_Max) as CPULoad_Max by localDn _time
| timechart max(CPULoad_Max) as CPULoad_Max by localDn _time
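A minimal sketch, assuming both counters live in the same events (field and index names taken from the post): timechart accepts multiple aggregations, so both series can come from a single command, and the intermediate bucket/stats pass is not needed because timechart buckets _time itself.

index="eniq_voice"
| where localDn="ManagedElement=TO5CSCF01"
| timechart span=15m max(CPULoad_Total) as CPULoad_Total max(CPULoad_Max) as CPULoad_Max

Each aggregation becomes its own line on the chart.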
Which do you use or side with, please? Which do you think is best for functionality and bandwidth usage? Thank you for your time and consideration.
I keep getting an error message in our messages section at the top, stating that search head cluster member ____ is having problems pulling configurations from the search head cluster captain: changes from the other members are not replicating to this member, and changes on this member are not replicating to other members; consider performing a destructive configuration resync on this search head cluster member.

I've performed the destructive resync several times. Also, when I run show kvstore-status from any search head, including the supposedly affected one, it claims there are no actual issues with the member; it just reports it as a non-captain KV store member like the rest. If I try to manually close out the error message, it just comes back. What else can I look into here?
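One hedged way to dig further (the exact component name can vary by version, and <affected_member> is a placeholder): search the internal logs for configuration-replication errors on the member showing the banner, around the time it appears.

index=_internal host=<affected_member> sourcetype=splunkd component=ConfReplication* (log_level=ERROR OR log_level=WARN)
| stats count latest(_raw) as latest_message by component

The latest_message per component often names the specific bundle or path the pull is failing on, which narrows whether this is a connectivity, permissions, or bundle-size problem.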
Hello, I'm looking to reference a specific artifact from the Phantom Playbook Visual Editor. For example, a Phantom: Update Artifact block takes two parameters: artifact_id and cef_json. The list of default datapaths for artifact_id all follow the format artifact:*.<field>, where the wildcard causes the update to occur on ALL artifacts. I would instead like to reference the first artifact in the container, so that only the first artifact is updated. Is there a way to construct the datapath to accomplish this?

The current workaround I have is to use a Custom Function to output the first artifact object of the container, but this only creates a snapshot of the artifact object at the time the function is called; if I update the artifact after calling the function, I'll need to call the function again to get the updated artifact object values. The closest thing I've seen to this is the phantom.collect() API call, in which you can specify a datapath with a specific label (i.e., phantom.collect(container, "artifact:uniqueLabel")), so that only the artifacts with the given label are returned, but this same syntax does not work in the Playbook Visual Editor.
I'm trying to extract a field that looks like "Alert-source-key":"[\"abcdd-gdfc-mb40-a801-e40fd9db481e\"]". I have tried "Alert-source-key":"(?P<Alert_key>[^"]+)" but I'm getting results like "[\" since it is only matching up to the first quote character.
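A hedged sketch of a simpler extraction: the value sits behind the escaped-quote prefix [\", so [^"]+ stops one character in. Assuming the key value contains only letters, digits, and hyphens (it looks GUID-like, but that is an assumption), letting \W+ skip the punctuation run avoids the quote-escaping headaches entirely:

| rex field=_raw "Alert-source-key\W+(?<Alert_key>[\w-]+)"

Here \W+ consumes the ":"[\" sequence between the key name and the value; if the value can ever contain other characters, the character class needs widening.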
Hi, I need help converting 210910085155 (yymmddhhmmss) to a date.

index=mydata
| eval fields=split(EventMsg,",")
| eval file_string=mvindex(fields,0)
| eval CreatedDate=mvindex(fields,17)
| table CreatedDate

CreatedDate = 210910085155; I need to convert it to a date.
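A minimal sketch: 210910085155 is already in yymmddhhmmss order, so strptime can parse it into epoch time and strftime can render it in whatever date format is needed (the output format below is an assumption).

index=mydata
| eval fields=split(EventMsg,",")
| eval file_string=mvindex(fields,0)
| eval CreatedDate=mvindex(fields,17)
| eval CreatedDate_epoch=strptime(CreatedDate, "%y%m%d%H%M%S")
| eval CreatedDate=strftime(CreatedDate_epoch, "%Y-%m-%d %H:%M:%S")
| table CreatedDate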
We have around 80+ accounts in AWS so far, and we spin up new accounts every so often. We're using the Splunk Add-on for AWS (#1876). Configuring each account manually is a chore. Is there any way to automatically configure new accounts without having to use the UI?
Hello @jkat54 , @richgalloway

I am new to the add-on and am not able to figure out how to make API calls with it. I am attempting to use the OpenWeatherMap API below { OpenWeatherMap API - Free Weather Data (for Developers) (rapidapi.com) }:

| curl method=GET uri=https://community-open-weather-map.p.rapidapi.com/weather user=<mysplunkusername> pass=<mysplunkpassword> headerfield= { 'x-rapidapi-host': "community-open-weather-map.p.rapidapi.com", 'x-rapidapi-key': "API_key_from_rapid_API" } data={"q":"London","lat":"0","lon":"0","callback":"test","id":"2172797","lang":"null","units":"imperial","mode":"json"}

Instead of getting the data, I am getting the output below. {attached screenshot} Can you please tell me what I am doing wrong?
I am getting started using the DS to deploy new configurations to UFs. I need to view the list of server classes, see what they contain and do, and learn how to add or edit them. Thanks in advance.
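If SPL access to the deployment server itself is available, a REST sketch like the one below should list the configured server classes; treat it as a sketch, since the exact attributes returned vary by version (whitelist.0/blacklist.0 mirror the serverclass.conf settings). The Forwarder Management UI (Settings > Forwarder Management) is where server classes are added and edited, with serverclass.conf as the underlying file.

| rest splunk_server=local /services/deployment/server/serverclasses
| table title whitelist.0 blacklist.0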
We are using the Splunk Add-on for AWS (Splunk Cloud) and we have succeeded in configuring the Description input to fetch descriptions of the usual kinds of AWS resources: EC2 instances, S3, etc. Yet we are missing some important resources that we need, like FSx for Lustre file systems. The add-on seems to have a hard-coded list of resources. How can we add FSx? Is there a way to extend it on our own? We can write the fetch calls if needed.
Hello, we have around 1200 systems that have UFs on them. They are a mixture of both Windows and Linux devices. I'm curious whether it's possible to use a platform like Tanium or SCCM to push the UF .MSI down to those systems and initiate the upgrade that way. Your input is GREATLY appreciated. Thanks.
Hello guys, does someone know whether it is possible to match search results with previous results of the same search? I have a machine that can enter different modes; for the example, let's say the machine can enter mode A, B, or C. I receive a heartbeat every few seconds from hundreds of these machines, which leads to a very large dataset. But I am not interested in the heartbeat itself; I am interested in the transition between modes. Example:

Time      Machine_ID  Mode
10:00:00  1           A
10:00:01  2           C
10:00:02  2           C
10:00:03  1           B
10:00:04  2           B

So what I am basically interested in here is the transition of machine 1 from mode A to B and of machine 2 from C to B. In other words, I am searching for heartbeats where the mode is different from the mode of the previous heartbeat of the same machine_ID. At the end, my result would look something like this:

_time     _time_old_Mode  machine_ID  new_mode  old_Mode
10:00:03  10:00:00        1           B         A
10:00:04  10:00:02        2           B         C

I have tried subsearches, but I was not successful. The simplified search for getting the heartbeat is currently:

index="heartbeat" | rex field=_raw "......(?P<MODE>.......)" | fields _time ID MODE

Performance is not crucial, as it is planned to run this at night for a summary index. Thanks in advance! Best Regards
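A hedged sketch using streamstats instead of a subsearch (the rex is taken from the post; the ID field name is an assumption): sort by machine and time, carry the previous event's mode and timestamp forward per ID, then keep only rows where the mode changed.

index="heartbeat"
| rex field=_raw "......(?P<MODE>.......)"
| sort 0 ID _time
| streamstats current=f window=1 last(MODE) as old_Mode last(_time) as _time_old_Mode by ID
| where MODE != old_Mode
| rename MODE as new_mode
| convert ctime(_time_old_Mode)
| table _time _time_old_Mode ID new_mode old_Mode

With current=f and window=1, streamstats looks at exactly one prior event per ID; the first heartbeat of each machine has no old_Mode and is dropped by the where clause.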
I notice some include .csv files. Do these .csv files need updating, or do they stay stale? How are the datasets updated? Please advise. Thank you very much.
Hey all, I recently migrated data for some indexes from a distributed Splunk instance to a clustered Splunk instance using the bucket-migration approach. After the migration, I can see the same data and event counts for each index in both the old and new instances for a specific time range set manually in the time range filter. But since the older instance is in the EDT time zone and the new instance is in UTC, when I compare the dashboards that use those indexes for validation purposes, I see a count mismatch because of the time zone difference.

I tried changing the preference to the same time zone in both instances, but it's not working. Can anyone please help and let me know how this issue can be resolved, so that the dashboards can be validated without setting the time range manually every time?
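One hedged way to validate the counts without touching the time pickers at all: run the same comparison over an absolute epoch window on both instances, since epoch boundaries are time-zone independent (the epoch values below are placeholders for whatever window is being validated).

| tstats count where index=* earliest=1631232000 latest=1631318400 by index

If the bucket migration was complete, the per-index counts from this search should match on both instances regardless of each instance's user time-zone preference; the dashboards can then be checked separately once the underlying data is confirmed identical.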
Hi!, I have recently deleted an user. I should not have done that.... Can I restore it? If anyone has any ideas I'd appreciate it greatly. Thanks in advance.
Hi guys! I recently started to use Splunk Cloud with the Jenkins Add-on, which works great. I have linked multiple Jenkins instances; a lot of them were testing Jenkins instances with dummy names. Now that the tests are done, I'd like to remove all the dummy names from the 'Jenkins Master' drop-down list in the Jenkins add-on section in Splunk. I tried deleting all Jenkins-related indexes and the HTTP Event Collector, and uninstalled the Jenkins add-on itself. After reinstalling it, I can still see all of those dummy Jenkins master names. I'd be grateful if you could help me remove them. Thanks!
Hello, I'm very new to the SPL world, so forgive me if this is basic. I am looking to create a visualization that charts out:

1) Errors reported for that day of the week in the last 7 days
2) A baseline of average errors per day of the week over the last 60 days

So far I have, as an example:

index=main sourcetype="access_combined_wcookie" status>=500
| chart count as "Server Errors" by date_wday
| eventstats avg(Server Errors)

This gives me the running average for errors, but not something like:

Day of the Week   Number of Errors   60-Day Average Errors for That Weekday
Monday            14                 12.38
Tuesday           10                 13.69
etc.

...which I could then chart. Any help and an explanation of the how would be much appreciated. Thank you in advance.
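A minimal sketch under these assumptions: search the last 60 days, count errors per calendar day, average the daily counts per weekday with eventstats, then keep only the last 7 days for the chart. Note that avg(Server Errors) silently fails because a field name containing a space must be quoted inside the function, e.g. avg("Server Errors").

index=main sourcetype="access_combined_wcookie" status>=500 earliest=-60d@d
| bin _time span=1d
| stats count as "Server Errors" by _time date_wday
| eventstats avg("Server Errors") as "60 Day Average" by date_wday
| where _time >= relative_time(now(), "-7d@d")
| eval "60 Day Average"=round('60 Day Average', 2)
| table date_wday "Server Errors" "60 Day Average"

The key difference from the original attempt is counting per day first (bin + stats), so the eventstats average is an average of daily totals per weekday rather than a single running average over everything.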
Hello! Is there any information on whether virtual BOTS at .conf '21 will be organised for time zones other than AMER? I am mostly interested in an EMEA one. I could find only one date and time, and that is 1 AM here, not a convenient time to play CTFs. Regards, Damian
I'm very stuck: how can I have a streamstats function accumulate a total and reset at 9:00am every day? It's straightforward if I have an event at exactly 9:00am. But if the last event was at, say, 8:55am and the next event is at 9:15am, the reset occurs, yet it then continues to reset for every event between 9:00am and 9:59am, because the condition in my example below remains true throughout the hour.

index=main | eval Hour=strftime(_time,"%H") | streamstats reset_after="("Hour==09")" sum(Result) as Total

I tried to experiment with specifying the minute, but the same problem exists if no event falls in the 9:00am minute.

index=main | eval Hour=strftime(_time,"%H%M") | streamstats reset_after="("Hour==0900")" sum(Result) as Total

I think I need to make a lookup to create an event at 9am for each day, but I couldn't figure that out for time ranges greater than one day. I experimented with makeresults to create an event, but this needed an append, which messed up all the other parts of my query. The most elegant way might be to have an event created for every 9am before the query is made, but I can't figure it out. Any advice/ideas are welcomed!

Dave
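A hedged sketch that avoids reset_after entirely (the Result field comes from the post): shift each event's timestamp back 9 hours so that every 9am-to-9am window lands on a single calendar date, then let streamstats group by that date.

index=main
| sort 0 _time
| eval reset_day=strftime(relative_time(_time, "-9h"), "%Y-%m-%d")
| streamstats sum(Result) as Total by reset_day

Because the grouping key changes exactly at 9:00, the running total restarts at the first event on or after 9:00 each day, whether or not an event falls at 9:00 precisely, and it never re-resets within the hour.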