All Topics


I'm using Dashboard Studio and have a geomap. I want to set the colors based on series, which I can do on a line or area chart using seriesColorsByField, but the documentation for maps only has dataColors and seriesColors, both of which appear to be ordered, so if a value is not present the colors shift. How can I get something like seriesColorsByField on a map visualization?
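One possible workaround until a field-keyed color option exists for maps (a sketch; the field name region and the value list are hypothetical): make the series set deterministic in SPL by appending every expected series with a zero count, so the ordered seriesColors list always lines up with the same series in the same order:

| stats count by region
| append
    [| makeresults
     | eval region=split("east,west,north,south", ",")
     | mvexpand region
     | eval count=0
     | fields region count]
| stats sum(count) as count by region
| sort region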
I have an embedded pie chart where I'm trying to show something other than "no results found" with the red exclamation mark, because that makes people think the report isn't working. I've tried several methods to address this, but I can't get the results I would like.

Query and results:

| inputlookup my_lookup
| search Exposure=External
| stats count by Status
| eval pie_slice = count + " " + Status
| fields pie_slice, count

Is it possible to add something to the query so that when there are zero results the chart still shows something?
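A common pattern for this (a sketch; the placeholder text is your choice): appendpipe a dummy slice only when the main search returns nothing, so the pie always renders. Note the sketch uses the "." operator for string concatenation, since eval's "+" on a number and a string can return null:

| inputlookup my_lookup
| search Exposure=External
| stats count by Status
| eval pie_slice = count . " " . Status
| fields pie_slice, count
| appendpipe
    [ stats count as rows
      | where rows == 0
      | eval pie_slice = "No external exposures", count = 1
      | fields pie_slice count ]

The appendpipe subsearch sees the full result set; stats count always emits one row, and the where clause keeps it only when the set was empty.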
Hello Members, here at the company we are going to carry out a full migration of Splunk Enterprise, currently in AWS Argentina, to AWS Northern Virginia. I would like to ask for some help: what are the best practices to follow for this type of migration, and with which server do we start transferring — heavy forwarder, indexer, search head, deployment server? Would the order make a difference? We have a large environment. As a side note, we will take a snapshot of the environment before starting the migration. We are considering using CloudEndure for this job; would it be the best option?
Register and ask questions here. This thread is for the Community Office Hours session on Getting Data In (GDI) to Splunk Platform on Wed, March 15, 2023 at 1pm PT / 4pm ET.

Join our bi-weekly Office Hours series where technical Splunk experts answer questions and provide how-to guidance on a different topic every month! This is your opportunity to ask questions related to your specific GDI challenge or use case, like how to onboard common data sources (AWS, Azure, Windows, *nix, etc.), using forwarders, apps to get data in, Data Manager (Splunk Cloud Platform), ingest actions, archiving your data, and anything else you'd like to learn!

There are two 30-minute sessions in this series. You can choose to attend one or both (each session will cover a different set of questions):

Wednesday, March 15th – 1:00 pm PT / 4:00 pm ET
Wednesday, March 29th – 1:00 pm PT / 4:00 pm ET

Please submit your questions below as comments in advance. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions with upvotes will be prioritized. After that, we will go in order of the questions posted below, then will open the floor up to live Q&A with meeting participants. If there's a quick answer available, we'll post it as a direct reply.

We look forward to connecting!
Hello, I have a question because I'm stuck.

`EasyVistaGeneric` "Statut"="En service" AND ("Identifiant réseau"="IMP*" OR "Identifiant réseau"="ECR*" OR "Identifiant réseau"="PCW*")
| dedup "Identifiant réseau"
| eval entité=mvindex(split('Entité (complète)',"/"),0)
| timechart span=1y count by entité useother=f usenull=f

I want to combine the entité values "Commune de Toulon", "METROPOLE TPM", "MTPM" and "Toulon" into a single value that we can name RESULT, so that the timechart counts them together. Can you help me please? Thanks
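One possible approach (a sketch): collapse the four labels into one value with eval's in() function before the timechart; "RESULT" here is just the label you chose:

`EasyVistaGeneric` "Statut"="En service" AND ("Identifiant réseau"="IMP*" OR "Identifiant réseau"="ECR*" OR "Identifiant réseau"="PCW*")
| dedup "Identifiant réseau"
| eval entité=mvindex(split('Entité (complète)',"/"),0)
| eval entité=if(in(entité, "Commune de Toulon", "METROPOLE TPM", "MTPM", "Toulon"), "RESULT", entité)
| timechart span=1y count by entité useother=f usenull=f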
Hi Splunkers, reaching out for help. This is a sample _raw event:

12.23.454, abcd, 12.34.45,abc@gmail.com,"[EXTERNAL] 300,000+ software product demos",SEND,OK

I want to split this with the split command, using a comma as the delimiter, and assign the pieces to different fields. However, "[EXTERNAL] 300,000+ software product demos" is a single field and I don't want it split into multiple fields. In a few other events the embedded comma is not present, for instance:

12.23.454, abcd, 12.34.45,abc@gmail.com,  "[EXTERNAL] 300000+ software product demos"  ,SEND,OK

How do I ensure that these values are assigned to one field in both kinds of events?

"[EXTERNAL] 300,000+ software product demos"
"[EXTERNAL] 300000+ software product demos"

Thanks for your help
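One approach (a sketch; <COMMA> is an arbitrary placeholder token): protect the comma inside the quotes with rex mode=sed, split on the remaining commas, then restore it. As written this handles one embedded comma per quoted field; repeat the rex line if there can be more:

| rex mode=sed field=_raw "s/\"([^\"]*),([^\"]*)\"/\"\1<COMMA>\2\"/g"
| eval parts=split(_raw, ",")
| eval sender=trim(mvindex(parts, 3))
| eval subject=replace(trim(mvindex(parts, 4)), "<COMMA>", ",")
| eval subject=trim(subject, "\" ")
| eval action=trim(mvindex(parts, 5)), status=trim(mvindex(parts, 6))

The field names sender, subject, action, and status are placeholders; mvindex positions assume the seven-column layout of your samples.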
Hello, I have a question. I am working on this map; however, when there are no results returned, I want to have an empty map rather than an error. How can I do this?

<dashboard version="1.1">
  <label>HPE IMC</label>
  <row>
    <panel>
      <title>La liste des alarmes</title>
      <viz type="location_tracker_app.location_tracker">
        <search>
          <query>index="imcfault" sourcetype="st_imcfault" severity=3 OR severity=4
| lookup switchs.csv ip AS sourceIp
| rex field=location "^(?&lt;latitude&gt;.+?), (?&lt;longitude&gt;.+?)$"
| eval latitude=if(isnull(latitude),"43.123888",latitude)
| eval longitude=if(isnull(longitude),"5.953356",longitude)
| table _time latitude longitude faultDesc</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="height">800</option>
        <option name="location_tracker_app.location_tracker.interval">10</option>
        <option name="location_tracker_app.location_tracker.showTraces">0</option>
        <option name="location_tracker_app.location_tracker.staticIcon">none</option>
        <option name="location_tracker_app.location_tracker.tileSet">light_tiles</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
</dashboard>
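One way to avoid the error state (a sketch, reusing the default coordinates already in your query): appendpipe a single placeholder row only when the search returns nothing, so the viz still has a row to draw. The faultDesc text is a placeholder:

... | table _time latitude longitude faultDesc
| appendpipe
    [ stats count as rows
      | where rows == 0
      | eval _time=now(), latitude="43.123888", longitude="5.953356", faultDesc="Aucune alarme"
      | table _time latitude longitude faultDesc ]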
Details for scheduled sessions can be found in the Community Office Hours section of the Splunk Community.

What's the goal of Community Office Hours?
- Review and answer questions submitted by office hour participants
- Help new customers get up-and-Splunking quickly
- Provide hands-on guidance and best practices to Splunk admins and users of all experience levels
- Share tips and tricks that may not be well-known or obvious

Who can attend these sessions?
Community Office Hours are designed for onboarding and early-maturity customers, but are open to anyone interested in learning more about Splunk Platform in a live, hands-on environment. You must register to attend a session.

How do I submit questions?
Please submit questions in advance by responding to the session thread in Community or by posting in the #office-hours user Slack channel (request access here). Please note:
- We will prioritize pre-submitted questions first, then will open the floor up to live Q&A.
- We will prioritize questions that we feel are good for the broader group, such as questions that are frequently asked and questions that receive high upvotes in Community (or Slack).
- While we will attempt to answer all questions, we are limited on time.

What's the session format?
- A majority of the 30-minute session (20-25 mins) will be dedicated to answering pre-submitted questions.
- The last portion (5-10 mins) will be an open Q&A, where participants can unmute or post their question in the chat. In general, we'll target 5 minutes per question to help us get to everyone.
- Please note: we'll work in a demo environment, so it may not match your individual environment.

Who's leading the discussion?
Subject matter experts from various Splunk technical, product, and support teams will be available to answer questions.
I have a situation where I have a multi-value field that can contain anywhere from 1 to 2,000 or more values in a day. Each value is exactly 38 characters long. Each 38-character string is a GUID for another application, and that application can only accept up to 1,000 characters at a time. What I'd like to do is chunk the strings together into complete blocks of 20, which would be 760 characters per block, and then call them by mvindex. But I haven't figured out how to do this in eval so that it works whether I have 1 string, 23 strings, or 900 strings to evaluate, since that count is always going to be the unknown variable. Any assistance on how to solve this would be very helpful.
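A sketch of one way to do this, assuming the multi-value field is named guid (a hypothetical name): expand to one row per value, number the rows, group every 20 into a block, then join each block back into a single 760-character string. It works for any number of values; the last block simply holds whatever remainder is left:

| mvexpand guid
| streamstats count as n
| eval block=ceiling(n / 20)
| stats list(guid) as guids by block
| eval chunk=mvjoin(guids, "")
| fields block chunk

This yields one row per block; if you need all chunks back in one multi-value field for mvindex, follow with stats list(chunk) as chunks.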
Does anyone have the Alert Manager app? I have Alert Manager 3.0.11. On the incident posture dashboard, next to the alert that triggered, there are options: run incident in search, edit incident, assign to me, and a lightning bolt. When I select edit incident, a window pops up with another drop-down to assign an owner. As I pull down the drop-down to select an owner, all of the usernames are on a white background and I cannot see them unless I mouse over the white space. Does anyone know how to change the background color of the drop-down to black so I can see the usernames?
Our Dev Splunk instance was recently upgraded from Splunk Enterprise 8.2.2.1 to 9.0.2. I am getting the following error on our primary Search Head from python.log on splunkd restart:

ERROR config:149 - [HTTP 401] Client is not authenticated
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/config.py", line 147, in getServerZoneInfoNoMem
    return times.getServerZoneinfo()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/times.py", line 163, in getServerZoneinfo
    serverStatus, serverResp = splunk.rest.simpleRequest('/search/timeparser/tz', sessionKey=sessionKey)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 625, in simpleRequest
    raise splunk.AuthenticationFailed
splunk.AuthenticationFailed: [HTTP 401] Client is not authenticated

I did a re-scan of everything using the newest version of the Upgrade Readiness App (off Splunkbase). Some apps did have a Python warning. I verified the currently installed versions of each app (with the exception of Splunk Enterprise package-included apps like Splunk Secure Gateway and Splunk RapidDiag), and the documentation states our installed versions are compatible with Enterprise 9.0. It does not appear that any installed apps are using a deprecated version of Python. I also ran the following command and verified our Python version as 3.7.11:

splunk cmd python -V

After combing over known issues for the 9.0 release and other Answers threads I've had no luck. I don't know whether this error is meaningful, so any direction would be appreciated. Thank you!
index=data severity IN ("critical","high","medium","low")
| eval TopHost = [ search index=tenable severity IN ("critical","high","medium","low")
    | where len(dnsName)>0
    | dedup dnsName,solution
    | dedup dnsName,pluginText
    | rex field=pluginName "^(?<VulnName>(?:\w+\s+){2})"
    | dedup dnsName,VulnName
    | top limit=1 dnsName
    | rename dnsName as query
    | fields query
    | head 1 ]
| where dnsName=TopHost
| table dnsName, ip

My query above works, but it is missing one thing: right now it gets only the first result (using the head command). I am trying to get the first 5 results and store them in my eval variable. I tried changing it to head 5 but got errors. Any help is appreciated. Thanks.
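An eval can only hold a single value, so one approach (a sketch) is to drop the eval and let the subsearch act as a filter instead: a subsearch that returns a dnsName field expands into an OR of the top five hosts in the outer search:

index=data severity IN ("critical","high","medium","low")
    [ search index=tenable severity IN ("critical","high","medium","low")
      | where len(dnsName) > 0
      | dedup dnsName, solution
      | dedup dnsName, pluginText
      | rex field=pluginName "^(?<VulnName>(?:\w+\s+){2})"
      | dedup dnsName, VulnName
      | top limit=5 dnsName
      | fields dnsName ]
| table dnsName, ip

The trailing fields dnsName strips the count and percent columns that top adds, so only the host filter is passed back.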
Hello, I have a .csv file with 2 columns: IoC and added_timestamp. I compared the data and I get a few matches, but what I want is to use just a portion of the .csv: based on the added_timestamp column, I want to compare only the IoCs added to the .csv in the last 7 days. Can someone help me accomplish this? Thank you in advance.
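A sketch of one way, assuming added_timestamp is in a format strptime can parse (adjust the format string to match your data; the lookup file name is a placeholder):

| inputlookup ioc_lookup.csv
| eval added=strptime(added_timestamp, "%Y-%m-%d %H:%M:%S")
| where added >= relative_time(now(), "-7d@d")
| fields IoC

Used as a subsearch against your events, rename IoC to whatever field you match on so the generated filter lines up with your event data.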
Hi Team,  Seeing this error when I go into the Mission Control App.  Any ideas? Thanks, Mike
Hi team, I heard that MC went GA. Congratulations! So, now that this is GA, is this our permanent SOAR instance, and can you confirm that we have 100 actions per day as part of using MC?
I am attempting to calculate the following:
- Total number of "Requests Per Day"
- Average/mean "Requests Per Day"
- Standard deviation of "Requests Per Day"

I am using the following search:

index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolApp ("\"statusCode\"")
| rex .*\"traceId\"\s:\s\"?(?<traceId>.*?)\".*
| dedup traceId
| rex "(?s)\"statusCode\"\s:\s\"?(?<statusCode>[245]\d{2})\"?"
| timechart span=1d count(statusCode) as "Number_Of_Requests"
| where Number_Of_Requests > 0
| eventstats mean(Number_Of_Requests) as "Average Requests Per Day" stdev(Number_Of_Requests) as "Standard Deviation"

I am getting results back, but I am unsure whether they are correct for what I am trying to find. For instance, I would have thought stdev() would need some eval statement to know what the total and average requests per day are. Does the where Number_Of_Requests > 0 skew the results, since those rows are not added to the result set? I was hoping someone could take a look at my query and provide a little insight into what I may still need to do to get an accurate standard deviation. Below is the output from the current query:

Number_Of_Requests | Average Requests Per Day | Standard Deviation
25687              | 64395                    | 54741.378572337766
103103             | 64395                    | 54741.378572337766

Any help is appreciated!
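A sketch for comparison: stdev() needs no separate eval, because it computes the standard deviation of the daily counts directly, just as avg() computes the mean. Keeping your base search and rex extractions, replacing the tail of the pipeline with stats gives a single summary row with all three figures:

| timechart span=1d count(statusCode) as Number_Of_Requests
| stats sum(Number_Of_Requests) as "Total Requests"
        avg(Number_Of_Requests) as "Average Requests Per Day"
        stdev(Number_Of_Requests) as "Standard Deviation"

And yes, the where Number_Of_Requests > 0 does change the statistics: days with zero requests are excluded from both the mean and the standard deviation. Keep it only if you want statistics over active days.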
Hi folks, I'm looking for the configuration of Splunk in Palo Alto XSOAR. I'm running Splunk ES on Windows Server 2012, and I have installed a universal forwarder on Windows for log collection (Active Directory). I would like to configure the Splunk instance in Palo Alto XSOAR. When I installed the Splunk API package in PyCharm, I was not able to connect to the Splunk server, even though I entered the correct IP address while configuring the instance and allowed an inbound rule on the Windows Splunk ES host. Can anyone help me? Thanks in advance.
Can you please explain how to do a partial fit with DensityFunction? I have created a lookup file for a day, say today; it has the fields src_ip, dest_ip, bytes, hod. First I created a search as below:

|inputlookup log_21.csv
| fit DensityFunction bytes by "hod" into model1 partial_fit=true

After that, I have to do a partial fit for another lookup with the same fields, starting from:

|inputlookup log_22.csv

How do I do it?
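As the MLTK documentation describes partial_fit, you repeat the same fit against the same model name, and the new data updates the existing model incrementally rather than replacing it (a sketch, assuming model1 already exists from the first run):

|inputlookup log_22.csv
| fit DensityFunction bytes by "hod" into model1 partial_fit=true

Each subsequent day's lookup would be fed through the identical fit line, extending model1 each time.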
I think I have a conceptual problem understanding these two commands, but in my mind you'd build a model with fit and somehow use that model to forecast (predict) future events, right? But for the life of me I can't find any examples of this in practice. In pseudo-SPL it might look something like this:

| mstats avg(some_metric) avg(some_other_metric) avg(yet_another_metric) WHERE index=my_index span=1d
| table some_metric some_other_metric yet_another_metric _time
| fit Ridge some_metric from some_other_metric yet_another_metric into my_model
| predict some_metric as prediction

I guess everything I read and the examples I see treat fit and predict as unrelated commands. So I have a couple of questions: What's the point of a model created with fit? Do fit and predict work together to forecast? Thank you to anyone who can clear up my thinking on this.
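In MLTK the counterpart to fit is apply, not predict: predict is a standalone time-series forecasting command that does not use MLTK models at all. A sketch under the same pseudo-SPL assumptions (metric and model names are from your example): first train and save the model,

| mstats avg(some_metric) avg(some_other_metric) avg(yet_another_metric) WHERE index=my_index span=1d
| fit Ridge some_metric from some_other_metric yet_another_metric into my_model

then later, on new data where only the explanatory metrics are known, apply it:

| mstats avg(some_other_metric) avg(yet_another_metric) WHERE index=my_index span=1d
| apply my_model

apply adds a predicted(some_metric) field computed from the saved Ridge coefficients; that is the point of persisting the model with "into".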
Hello, after redeploying my UF (with a props.conf, because my logs are in CSV), I end up with my new parsing mixed with the old parsing. Does this sound familiar to anyone? In my case, my logs end up with a non-existent header (Splunk takes the first line of the CSV, which is a log line, as the header). And at the same time it also parses correctly (because I fixed the problem). Thanks.
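Two things worth noting here. First, events already indexed keep the parsing they were given at ingest time; only data arriving after the props change picks up the fix, so a mix of old and new parsing for a while is expected. Second, for structured data the extraction happens on the UF itself, and if the file has no header line you can supply the column names explicitly rather than letting Splunk guess from the first line. A props.conf sketch for the UF (the sourcetype stanza and field names are placeholders):

# props.conf on the universal forwarder
[my_csv_sourcetype]
# parse the file as CSV on the forwarder
INDEXED_EXTRACTIONS = csv
# the file has no header row, so name the columns explicitly
FIELD_NAMES = field1, field2, field3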