All Topics


I have logs (Azure logs) that have two time fields, StartTime and ExpirationTime. Example:

index=azure sourcetype=my_sourcetype | table StartTime ExpirationTime role user

I want to take each user and check whether that user had a failed login attempt in another index/sourcetype between the two time fields StartTime and ExpirationTime. Any help would be greatly appreciated.
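One hedged sketch of this (the failed-login index, sourcetype, action filter, and timestamp format below are assumptions, not names from the post) is to convert the two fields to epoch time and drive a per-user, per-window search with map:

index=azure sourcetype=my_sourcetype
| eval earliest=strptime(StartTime, "%Y-%m-%dT%H:%M:%S"), latest=strptime(ExpirationTime, "%Y-%m-%dT%H:%M:%S")
| table user earliest latest
| map maxsearches=100 search="search index=wineventlog sourcetype=auth action=failure user=$user$ earliest=$earliest$ latest=$latest$"

map runs one search per input row (substituting the $field$ tokens), so cap the row count with maxsearches to keep it affordable.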
I have a subsearch, and am trying to use the value of a field I extracted in an inner search to check whether that value exists anywhere in _raw for the results of my outer search. Current search:

index=my_index
| append
    [ search index=my_index "RecievedFileID"
      | rex field=_raw "RecievedFileID\s(?<file_id>\w*)"
      | fields file_id ]
| search file_id

I can confirm the regex is working, but I can't figure out how to check _raw for any presence of the value of file_id. The logic I'm looking for on the last line is essentially: | where _raw contains the value of file_id. Any assistance is appreciated.
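A hedged sketch of one way to do this (assuming a plain term match of the extracted value against _raw is the goal): instead of append, have the subsearch return its values under the special field name search, which makes Splunk apply them as raw search terms in the outer search:

index=my_index
    [ search index=my_index "RecievedFileID"
      | rex field=_raw "RecievedFileID\s(?<file_id>\w+)"
      | dedup file_id
      | fields file_id
      | rename file_id AS search ]

The subsearch then expands to a list of bare values ORed together, and the outer search keeps only events whose _raw contains one of them.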
Hi guys! I need help with a time problem. My structure is the following: I have many universal forwarders installed on Windows machines that collect data, and a heavy forwarder that handles the universals and forwards data to an Enterprise instance. My issue is that one of the servers where a universal forwarder is installed has a clock that differs from the other machines and from the heavy forwarder, specifically by -1h. So when I use searches and alerts in real time or over a 5/10-minute range, I miss all the events from that machine. I would like all events to take as _time the system time of the Enterprise instance (index time), or at least the heavy forwarder's system time. I tried changing props.conf, inserting date_config = current at every level, but nothing changed. A custom configuration that adds +1h for that specific host would also be fine, as long as its _time field is aligned with the other machines. Some assumptions: all the machines are in the same country; the particular machine has a different clock setting that cannot be changed; the events that the universal forwarder generates always contain a timestamp from that specific machine, but we don't want it as _time.
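For reference, the props.conf setting is spelled DATETIME_CONFIG (not date_config), and it only takes effect where timestamp parsing happens, which with universal forwarders is the heavy forwarder, not the UF itself. A minimal sketch, assuming the skewed machine is the only host matched by the stanza (the host name is a placeholder):

# props.conf on the heavy forwarder, which does the timestamp parsing
[host::skewed-host]
DATETIME_CONFIG = CURRENT

This stamps events from that host with the heavy forwarder's clock at parse time instead of the timestamp embedded in the raw event.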
I'm looking at a very large set of data that separates transactions by product. I've performed some relatively straightforward stats commands on the aggregate data set, but now I'd like to extract a similar set of data as it relates to unique accounts. For example, I want to look at stats related to products, but across unique accounts rather than accounts as a total, to give insight into how specific accounts behave. For the purposes of this, let y be product and x be accountHash. In a Splunk query, I could extract the distinct account count from the data set by product with the following:

index=index source=source sourcetype=sourcetype product=productValue
| fields product, accountHash
| lookup productCode AS product OUTPUT productDescription
| stats dc(accountHash) as uniqueAccounts by productDescription

What if I wanted to look at, say, stats count as Volume, avg(transactionValue), etc. across unique accounts? Can I then aggregate the totals by productDescription? I know that I could do something like this:

index=index source=source sourcetype=sourcetype product=productValue
| fields product, accountHash, transactionValue
| lookup productCode AS product OUTPUT productDescription
| stats count as Volume, avg(transactionValue) as avgTranValue by accountHash, product

But this would give me a dataset with too many rows to be meaningful. Is it possible to create statistics by unique accountHash values, and then tie those to a product? I don't need to see the individual account values, but I'd like to compare against statistics across the aggregate total, which would likely be skewed towards the accounts that transact the most. Could I do something like | stats by accountHash and then another stats command that gives me product results across distinct accounts? If the question isn't clear, let me know and I will try to rephrase.
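A hedged sketch of the two-stage stats pattern this describes (field and lookup names are reused from the post; the second-stage aggregations are illustrative choices, not the only option): compute per-account stats first, then roll them up by product:

index=index source=source sourcetype=sourcetype product=productValue
| stats count AS Volume avg(transactionValue) AS avgTranValue BY accountHash product
| lookup productCode AS product OUTPUT productDescription
| stats dc(accountHash) AS uniqueAccounts avg(Volume) AS avgVolumePerAccount avg(avgTranValue) AS avgTranValuePerAccount BY productDescription

Because the first stats collapses to one row per account and product, the second stats averages across accounts equally rather than weighting heavy-use accounts by their transaction counts.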
I'm using Dashboard Studio and have a geomap. I want to set the colors based on series, which I can do on a line or area chart using seriesColorsByField, but the documentation for maps only has dataColors and seriesColors, both of which appear to be ordered, so if a value is not present the colors shift. How can I do something similar to seriesColorsByField on a map?
I have an embedded pie chart where I'm trying to show something rather than "no results found" with the red exclamation mark, which is making people think the report isn't working. I've tried several methods to address this, but I can't get the results I would like. Query and results:

| inputlookup my_lookup
| search Exposure=External
| stats count by Status
| eval pie_slice = count + " " + Status
| fields pie_slice, count

Is it possible to add something to the query so that when there are zero results the chart still renders something?
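One common pattern (a sketch; the placeholder label is an assumption) is to append a dummy row only when the result set is empty, using appendpipe, which always emits one row from stats count even on empty input:

| inputlookup my_lookup
| search Exposure=External
| stats count by Status
| eval pie_slice = count + " " + Status
| fields pie_slice, count
| appendpipe
    [ stats count AS count
      | where count == 0
      | eval pie_slice = "0 No external exposures", count = 1 ]

When real rows exist, count is greater than 0 and the where clause discards the placeholder, so the chart is unchanged.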
Hello Members, here at the company we are going to carry out a full migration of Splunk Enterprise, which is currently in AWS Argentina, to AWS North Virginia. I would like to ask for some help: what are the best practices for this type of migration, and from which server should we start transferring data: heavy forwarder, indexer, search head, or deployment server? Would the order make any difference? We have a large environment. As a side note, we will take a snapshot of the environment before starting the migration. We are considering CloudEndure for this job; would it be the best option?
Hello, I have a question because I'm stuck.

`EasyVistaGeneric` "Statut"="En service" AND ("Identifiant réseau"="IMP*" OR "Identifiant réseau"="ECR*" OR "Identifiant réseau"="PCW*")
| dedup "Identifiant réseau"
| eval entité=mvindex(split('Entité (complète)',"/"),0)
| timechart span=1y count by entité useother=f usenull=f

I want to combine the entité values "Commune de Toulon", "METROPOLE TPM", "MTPM", and "Toulon" into a single value that we could name RESULT, so that RESULT covers all four. Can you help me please? Thanks
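A hedged sketch (assuming the four values should simply be relabelled before charting): normalize them with eval's in() function, then run the timechart on the normalized field:

`EasyVistaGeneric` "Statut"="En service" AND ("Identifiant réseau"="IMP*" OR "Identifiant réseau"="ECR*" OR "Identifiant réseau"="PCW*")
| dedup "Identifiant réseau"
| eval entité=mvindex(split('Entité (complète)',"/"),0)
| eval entité=if(in(entité, "Commune de Toulon", "METROPOLE TPM", "MTPM", "Toulon"), "RESULT", entité)
| timechart span=1y count by entité useother=f usenull=f

All four entities then count under the single RESULT series while every other entité keeps its own series.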
Hi Splunkers, reaching out for help. This is a sample _raw event:

12.23.454, abcd, 12.34.45,abc@gmail.com,"[EXTERNAL] 300,000+ software product demos",SEND,OK

I want to split this with the split command, using a comma as the delimiter, and assign the pieces to different fields. However, "[EXTERNAL] 300,000+ software product demos" is a single field, and I don't want it split into multiple fields. In a few other events, no comma is present inside the quoted value. For instance:

12.23.454, abcd, 12.34.45,abc@gmail.com,  "[EXTERNAL] 300000+ software product demos"  ,SEND,OK

How do I ensure that these values are assigned to a single field in both kinds of events?

"[EXTERNAL] 300,000+ software product demos"
"[EXTERNAL] 300000+ software product demos"

Thanks for your help
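Since split() cannot honor quoted delimiters, a hedged alternative (the field names here are hypothetical stand-ins) is a single rex whose subject capture matches either a whole quoted string or one unquoted token:

| rex field=_raw "^(?<msg_id>[^,]+),\s*(?<name>[^,]+),\s*(?<version>[^,]+),\s*(?<email>[^,]+),\s*(?<subject>\"[^\"]*\"|[^,]+?)\s*,\s*(?<action>[^,]+),\s*(?<status>[^,]+)$"
| eval subject=trim(subject, "\"")

The alternation keeps "[EXTERNAL] 300,000+ software product demos" intact because the quoted branch swallows the embedded comma, and the trailing trim strips the surrounding quotes when they are present.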
Hello, I have a question. I am working on this map. However, when there are no results returned, I want to have an empty map and not an error. What can I do?

<dashboard version="1.1">
  <label>HPE IMC</label>
  <row>
    <panel>
      <title>La liste des alarmes</title>
      <viz type="location_tracker_app.location_tracker">
        <search>
          <query>index="imcfault" sourcetype="st_imcfault" severity=3 OR severity=4
| lookup switchs.csv ip AS sourceIp
| rex field=location "^(?&lt;latitude&gt;.+?), (?&lt;longitude&gt;.+?)$"
| eval latitude=if(isnull(latitude),"43.123888",latitude)
| eval longitude=if(isnull(longitude),"5.953356",longitude)
| table _time latitude longitude faultDesc</query>
          <earliest>-15m</earliest>
          <latest>now</latest>
        </search>
        <option name="height">800</option>
        <option name="location_tracker_app.location_tracker.interval">10</option>
        <option name="location_tracker_app.location_tracker.showTraces">0</option>
        <option name="location_tracker_app.location_tracker.staticIcon">none</option>
        <option name="location_tracker_app.location_tracker.tileSet">light_tiles</option>
        <option name="refresh.display">progressbar</option>
      </viz>
    </panel>
  </row>
</dashboard>
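One hedged Simple XML pattern (a sketch: it hides the viz entirely rather than drawing an empty map, which may or may not be acceptable here) is to set a token from the search's result count and make the viz depend on it:

<search>
  <query>...same query as above...</query>
  <earliest>-15m</earliest>
  <latest>now</latest>
  <progress>
    <condition match="'job.resultCount' &gt; 0">
      <set token="show_map">true</set>
    </condition>
    <condition>
      <unset token="show_map"></unset>
    </condition>
  </progress>
</search>

Then declare the viz as <viz type="location_tracker_app.location_tracker" depends="$show_map$">, so the error state is never rendered when the search comes back empty.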
I have a situation where I have a multi-value field that can contain anywhere from 1 to 2000 or more values in a day. Each value is exactly 38 characters long; each 38-character string is a GUID for another application, and that application can only accept up to 1000 characters at a time. What I'd like to do is chunk the strings together in complete blocks of 20, which would be 760 characters per block, and then call them by mvindex. But I haven't figured out how to do this in eval so that it works whether I have 1 string, 23 strings, or 900 strings to evaluate, since that is always going to be the unknown variable. Any assistance on how to solve this would be very helpful.
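A hedged sketch (the multi-value field is called guid here, which is an assumption; if several source events are involved, add the event key to the BY clause): expand, number, and regroup in blocks of 20, which handles any count of values without knowing it in advance:

... | mvexpand guid
| streamstats count AS n
| eval block = floor((n - 1) / 20)
| stats list(guid) AS guids BY block
| eval chunk = mvjoin(guids, "")

Each row then carries one chunk of up to 760 characters; a final partial block simply ends up shorter.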
Does anyone have the Alert Manager app? I have Alert Manager 3.0.11. On the Incident Posture dashboard, next to the alert that triggered, there are options: run incident in search, edit incident, assign to me, and a lightning bolt. When I select edit incident, a window pops up with another drop-down to assign an Owner. As I pull down the drop-down to select an owner, all of the usernames are on a white background and I cannot see them unless I scroll over the white space. Does anyone know how to change the dropdown's background color to black so I can see the usernames?
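If the app's styling can be overridden, a hedged CSS sketch (both the file location and the selector are guesses; the actual dropdown element should be confirmed with the browser's inspector before applying this):

/* hypothetical override, e.g. dropped into the app's appserver/static stylesheet */
select option {
    background-color: #000000;
    color: #ffffff;
}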
Our Dev Splunk instance was recently upgraded from Splunk Enterprise 8.2.2.1 to 9.0.2. I am getting the following error on our primary search head from python.log on splunkd restart:

ERROR config:149 - [HTTP 401] Client is not authenticated
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/config.py", line 147, in getServerZoneInfoNoMem
    return times.getServerZoneinfo()
  File "/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/times.py", line 163, in getServerZoneinfo
    serverStatus, serverResp = splunk.rest.simpleRequest('/search/timeparser/tz', sessionKey=sessionKey)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 625, in simpleRequest
    raise splunk.AuthenticationFailed
splunk.AuthenticationFailed: [HTTP 401] Client is not authenticated

I did a re-scan of everything using the newest version of the Upgrade Readiness App (from Splunkbase). Some apps did have a Python warning. I verified the currently installed versions of each app (with the exception of Splunk Enterprise package-included apps like Splunk Secure Gateway and Splunk RapidDiag), and the documentation states our installed versions are compatible with Enterprise 9.0. It does not appear that any installed apps are using a deprecated version of Python. I also ran the following command and verified our Python version as 3.7.11:

splunk cmd python -V

After combing over known issues for the 9.0 release and other Answers threads, I've had no luck. I don't know whether this error is meaningful, so any direction would be appreciated. Thank you!
index=data severity IN ("critical","high","medium","low")
| eval TopHost = [ search index=tenable severity IN ("critical","high","medium","low")
    | where len(dnsName)>0
    | dedup dnsName,solution
    | dedup dnsName,pluginText
    | rex field=pluginName "^(?<VulnName>(?:\w+\s+){2})"
    | dedup dnsName,VulnName
    | top limit=1 dnsName
    | rename dnsName as query
    | fields query
    | head 1 ]
| where dnsName=TopHost
| table dnsName, ip

My query above works, but it is missing one thing. Right now it gets only the first result (using the head command). I am trying to get the first 5 results and store them in my eval variable. I tried changing it to head 5 but got errors (error attached). Any help is appreciated. Thanks
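A subsearch used inside eval has to reduce to a single scalar, which is why head 5 errors out. A hedged alternative sketch: drop the eval entirely and let the subsearch hand the outer search a filter for the top five hosts directly (by returning the dnsName field, the subsearch expands to (dnsName="a") OR (dnsName="b") ...):

index=data severity IN ("critical","high","medium","low")
    [ search index=tenable severity IN ("critical","high","medium","low")
      | where len(dnsName) > 0
      | dedup dnsName, solution
      | dedup dnsName, pluginText
      | rex field=pluginName "^(?<VulnName>(?:\w+\s+){2})"
      | dedup dnsName, VulnName
      | top limit=5 dnsName
      | fields dnsName ]
| table dnsName, ip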
Hello, I have a .csv file with 2 columns: IoC and added_timestamp. I did compare the data and I got a few matches, but what I want is to use just a portion of the .csv: based on the added_timestamp column, I want to compare only the IoCs added to the .csv in the last 7 days. Can someone help me accomplish this? Thank you in advance.
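A hedged sketch of the filtering step (the lookup file name and the strptime format string are assumptions; the format must match what is actually stored in added_timestamp):

| inputlookup ioc_list.csv
| eval added = strptime(added_timestamp, "%Y-%m-%d %H:%M:%S")
| where added >= relative_time(now(), "-7d@d")
| fields IoC

Used as a subsearch against the event data, this restricts the comparison to IoCs added in the last 7 days.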
Hi Team,  Seeing this error when I go into the Mission Control App.  Any ideas? Thanks, Mike
Hi team, I heard that MC went GA.  Congratulations!  So, now that this is GA, is this our perm SOAR instance and to confirm that we have 100 actions per day as part of using MC?
I am attempting to calculate the following:
- Total number of "Requests Per Day"
- Average/mean "Requests Per Day"
- Standard deviation of "Requests Per Day"

I am using the following search:

index=myCoolIndex cluster_name="myCoolCluster" sourcetype=myCoolSourceType label_app=myCoolApp ("\"statusCode\"")
| rex .*\"traceId\"\s:\s\"?(?<traceId>.*?)\".*
| dedup traceId
| rex "(?s)\"statusCode\"\s:\s\"?(?<statusCode>[245]\d{2})\"?"
| timechart span=1d count(statusCode) as "Number_Of_Requests"
| where Number_Of_Requests > 0
| eventstats mean(Number_Of_Requests) as "Average Requests Per Day" stdev(Number_Of_Requests) as "Standard Deviation"

I am getting results back, but am unsure whether they are correct for what I am trying to find. For instance, I would have thought stdev() would need some eval statement to know what the total and average requests per day are. Does the "where Number_Of_Requests > 0" skew the results, since those rows are not added to the result set? I was hoping someone could take a look at my query and provide a little insight as to what I may still need to do to get an accurate standard deviation. Below is the output I am getting from the current query:

Number_Of_Requests | Average Requests Per Day | Standard Deviation
25687              | 64395                    | 54741.378572337766
103103             | 64395                    | 54741.378572337766

Any help is appreciated!
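For context: mean() and stdev() need no extra eval, since eventstats computes them over whatever daily rows reach it; for the same reason, the where clause does change both values by removing zero-request days from the baseline. A hedged restructuring of the tail of the search (a sketch; keep or drop the where clause depending on whether zero days should count):

... | timechart span=1d count(statusCode) AS Number_Of_Requests
| eventstats sum(Number_Of_Requests) AS "Total Requests"
             avg(Number_Of_Requests) AS "Average Requests Per Day"
             stdev(Number_Of_Requests) AS "Standard Deviation"

This also adds the total as a column, which the original search never computed.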
Hi folks, I'm looking to configure Splunk in Palo Alto XSOAR. I'm running Splunk ES on Windows Server 2012, and I have installed a universal forwarder on Windows for log collection (Active Directory). I would like to configure the Splunk instance in Palo Alto XSOAR. I installed the Splunk API (in PyCharm), but I am not able to connect to the Splunk server: I entered the correct IP address while configuring the instance and allowed an inbound rule on the Windows Splunk ES host, but I still can't connect. Can anyone help me? Thanks in advance.
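As a hedged first diagnostic (assuming XSOAR's Splunk integration talks to Splunk's management port, 8089 by default, and that the inbound rule must therefore cover that port, not just 8000/9997), verify from the XSOAR host that the REST API answers; the credentials below are placeholders:

curl -k -u admin:yourpassword https://your-splunk-es-host:8089/services/server/info

If this returns server info XML, the network path and credentials are fine and the problem is in the instance configuration; if it times out, the firewall rule or port is the issue.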
Can you please explain how to do a partial fit on DensityFunction? I have created a lookup file for a day, say today; it has fields src_ip, dest_ip, bytes, hod. First I created a search as below:

| inputlookup log_21.csv
| fit DensityFunction bytes by "hod" into model1 partial_fit=true

After that, I have to do a partial fit for another lookup with the same fields. How do I do it?

| inputlookup log_22.csv
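For what it's worth, with partial_fit=true the MLTK fit command updates an existing model incrementally instead of overwriting it, so the second day would be a sketch reusing the names from the post:

| inputlookup log_22.csv
| fit DensityFunction bytes by "hod" into model1 partial_fit=true

Repeating the same command over each day's lookup keeps folding new data into model1.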