All Topics


Hi, I would appreciate it if someone could assist me with a problem. The events appearing in the indexer on Splunk Cloud are exceeding my license limit. Is there a way to redirect unwanted events to a null queue?
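The usual mechanism for this is nullQueue routing in props/transforms, typically applied on a heavy forwarder in front of Splunk Cloud. A minimal sketch, where the sourcetype and the pattern to drop are placeholder assumptions:

# props.conf
[my_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted

# transforms.conf
[drop_unwanted]
REGEX = \bDEBUG\b
DEST_KEY = queue
FORMAT = nullQueue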
Hi! I have written a PowerShell script to obtain hard-disk information for local drives and report it to Splunk.

If(Get-Command -Name 'Get-CimInstance' -ErrorAction SilentlyContinue) {
    $Drives = Get-CimInstance -Query 'SELECT * FROM Win32_LogicalDisk WHERE DriveType=3' -QueryDialect 'WQL'
}
Else {
    $Drives = Get-WmiObject -Query 'SELECT * FROM Win32_LogicalDisk WHERE DriveType=3'
}
$Drives | ForEach-Object {
    $Drive = $_ | Select-Object FreeSpace,Size,FileSystem,VolumeSerialNumber,PercentFree,UsedGB,FreeGB,@{Name='DriveLetter';Expression={ $_.DeviceID }},IsSystemdrive
    $Drive.PercentFree = [Math]::Round(($Drive.FreeSpace / $Drive.Size * 100),2)
    $Drive.UsedGB = [Math]::Round((($Drive.Size - $Drive.FreeSpace) / 1GB),2)
    $Drive.FreeGB = [Math]::Round(($Drive.FreeSpace / 1GB),2)
    If($Drive.DriveLetter -eq $env:SystemDrive) {
        $Drive.IsSystemdrive = $true
    }
    Else {
        $Drive.IsSystemdrive = $false
    }
    $Drive
}

This gives me the following result in Splunk for a system with hard disks C and D:

FreeSpace          : 37910388835
Size               : 106847793152
FileSystem         : NTFS
VolumeSerialNumber : 64A9A098
PercentFree        : 53,54
UsedGB             : 46,23
FreeGB             : 53,28
DriveLetter        : C:
IsSystemdrive      : True

FreeSpace          : 27610488832
Size               : 268432306176
FileSystem         : NTFS
VolumeSerialNumber : E2651A32
PercentFree        : 10,29
UsedGB             : 224,28
FreeGB             : 25,71
DriveLetter        : D:
IsSystemdrive      : False

My query skills are not the best... How can I separate the PowerShell objects (disk C: and D:) in a query? For example, to monitor the system drive only? (This discussion is for learning how to parse such PowerShell objects, not to use other workarounds.) I'm also grateful if someone has tips on how to better prepare (separate) the PowerShell objects for Splunk searches.
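A minimal search sketch, assuming each drive object is indexed as its own event and the "key : value" pairs are extracted as fields (the index and sourcetype names are placeholders):

index=windows sourcetype="powershell_diskinfo" IsSystemdrive=True
| table host, DriveLetter, PercentFree, UsedGB, FreeGB

If all drives arrive as one event, having the script emit one line of key=value pairs per drive usually makes search-time extraction, and this kind of per-drive filtering, much easier.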
Splunk Add-on for Box — Hi folks, I tried to integrate my Box account with Splunk using the Splunk Add-on for Box, but I am receiving a 403 error. I did the same configuration two weeks ago; it was successfully configured and I was able to see logs. Is there a Box API update, or is this not the expected behavior? I am receiving the same error on both Splunk Enterprise and Splunk Cloud, for both the personal and enterprise versions of Box.
I had a tabular chart with component, basket, and age columns. The problem was that the same component appeared with different basket values and different ageing as separate rows, as shown below, which duplicated rows with the same component name. So my goal was to combine the same component into a single row with multiple basket values and display the ageing of each component with respect to the basket, so I used the code below. But in the output, it turns out that some values for the combined rows are missing, and a few are reversed and incorrect. Can anyone help get the values of those rows to be what they are supposed to be?
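One possible cause of the "reversed" values: stats values() sorts each column independently, so basket and age can become misaligned across the two multivalue columns. A minimal sketch that keeps the pairs together, assuming the fields are named component, basket, and age:

| eval basket_age=basket." : ".age
| stats values(basket_age) as "basket : age" by component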
All, what does it mean when there are multiple HttpServlet:service() and dispatcherServlet:doService() nodes in the call graph of a transaction snapshot? For example: call graph showing multiple calls to HttpService.service. The service does use multiple threads. Thanks
I am trying to eventually get to the point where I can add this to props.conf, but I am trying out the searches in Splunk first to make sure they work. I was following this example, but it wasn't working for me, so I backed it up a bit and simplified it. If I run this search, it works and converts all instances of abc to def:

| rex field=query mode=sed "s/abc/def/"

However, when I do this, it doesn't throw an error but doesn't convert anything; all abc's are still present in the fields:

| rex mode=sed "s/abc/def/"

It's been driving me nuts trying to figure out why.
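For the eventual props.conf step, the index-time equivalent of rex mode=sed is SEDCMD. A minimal sketch, assuming a sourcetype of "my_sourcetype" and applied on the parsing tier (indexers or heavy forwarders):

[my_sourcetype]
SEDCMD-replace_abc = s/abc/def/g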
I have data like this, and I want to display the step with the maximum value.

Detail: {
  Id: 12345678
  RequestCompleteTS: 2023-04-27T15:59:30.6960113-04:00
  Steps: {
    0-step1: 32
    0-step2: 15
    3-step3: 33
    4-step4: 49
    5-step5: 15
    6-step6: 9
    7-step7: 8
  }
  StepsCnt: 18
  TargetRegion: BRD
}
LogType: Info
Message: Success
Time: 2023-04-27 15:59:30.696--04:00

Desired output:

Id        Step that is taking maximum time
12345678  4-step4
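A sketch of one way to pick the largest step, assuming the JSON fields are extracted (for example via spath or automatic KV extraction) as Detail.Steps.<step name>:

| spath
| foreach Detail.Steps.*
    [ eval max_val=if(isnull(max_val) OR '<<FIELD>>' > max_val, '<<FIELD>>', max_val),
           max_step=if('<<FIELD>>' == max_val, "<<MATCHSTR>>", max_step) ]
| rename Detail.Id as Id
| table Id, max_step, max_val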
Hi Splunk community, I am currently trying to break up a log. It is in this format after converting to JSON. Each plus under response has a block of information with several variables. I need Splunk to pull out the values of the variables I tell it to, but grouped together. I tried breaking this up using mvexpand, but when I do, it groups all the names in one log and all the results in another, which makes it difficult to graph. An example of how it looks is below. That format doesn't work, since every name variable has the same output when graphed, because every single group is one "log", which makes insights difficult. I need it to do something like this. The search that I have been using is below:

index=myindex attrs.deploymentKey="production" "MY COPY" "MY ROUTER*"
| spath input=line
| tojson auto(line)
| spath path=line.additionalInfo{}
| eval resp=mvindex('line.additionalInfo{}', 0,2)
| mvexpand data
| spath input=data output=my_name path=response{}.NAME
| spath input=data output=my_results path=response{}.Results
| where my_results = "Y"
| table my_name, my_results

Any help would be much appreciated.
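A sketch of one way to keep each NAME/Results pair together: expand the response{} array itself, then extract fields from each expanded element. The paths and key names here are assumptions based on the description and will need adjusting to the real structure:

index=myindex attrs.deploymentKey="production" "MY COPY" "MY ROUTER*"
| spath input=line path=additionalInfo{}.response{} output=resp
| mvexpand resp
| spath input=resp output=my_name path=NAME
| spath input=resp output=my_results path=Results
| where my_results="Y"
| table my_name, my_results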
Hello, we have recently moved over to the Splunk Cloud platform, and I am making a dashboard that will have panels for each of our reporting servers/tools. For example, the dashboard will have a panel to show all IPS devices reporting in, all proxies, all Windows servers, etc. I have created a query to show all proxies reporting in over the week, along with a timewrap to show the difference from the week before:

index="siem-proxy" source="global"
| timechart dc(an)
| rename dc(an) as "Proxy"
| timewrap 1w
| rename "Proxy_1week_before" as "Proxy Previous Week"
| rename "Proxy_latest_week" as "Proxy Latest"

This search goes through millions of events to show that 15 proxies have reported in per day, so it is very slow to run. Is there an easy way to make this more efficient? Cheers
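One approach that might speed this up is tstats, which works on index-time summaries instead of raw events. A sketch, assuming the proxy identity can be taken from an indexed field such as host (a search-time field like an would need an accelerated data model instead):

| tstats prestats=t dc(host) where index="siem-proxy" source="global" by _time span=1d
| timechart span=1d dc(host) as Proxy
| timewrap 1w

followed by the same renames as in the original search.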
Hey Splunkers, how can I get Splunk to use the time from the source path as _time? The following are the two source files; one has a date and time, the other has only a date.

/project/admin/sv/re/sniff/pre/logs/2022-12-16T11-57-36/status
/project/aadmin/sv/re/sniff/pre/logs/2022-12-16/status

How do I write props and transforms for this? Thanks in advance
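Under the assumption that an index-time eval is acceptable, one sketch uses transforms.conf INGEST_EVAL to parse the timestamp out of the source path; the stanza names, the source wildcard, and the regexes are illustrative and need adapting to the real paths:

# props.conf
[source::/project/*/sv/re/sniff/pre/logs/*/status]
TRANSFORMS-set_time_from_source = set_time_from_source

# transforms.conf
[set_time_from_source]
INGEST_EVAL = _time:=coalesce(strptime(replace(source, ".*/(\d{4}-\d{2}-\d{2}T\d{2}-\d{2}-\d{2})/.*", "\1"), "%Y-%m-%dT%H-%M-%S"), strptime(replace(source, ".*/(\d{4}-\d{2}-\d{2})/.*", "\1"), "%Y-%m-%d"), _time)

The first strptime handles the date-and-time directory, the second handles the date-only directory (which resolves to midnight); if neither matches, _time is left as Splunk assigned it.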
I have a table that has the following fields: IP, Host_Auth, _time. The _time field shows the time the host was authenticated against for the current week and the previous one. How can I compare the Host_Auth field from the last two results for the same host? If the value of Host_Auth for a particular IP was successful last week but not this week, how can I show that?

Example:

IP         Host_Auth          _time
1.1.1.1    Unix Successful    2023-04-23 00:00:00
1.1.1.1    Unix Successful    2023-04-16 00:00:00
2.2.2.2    Unix Failed        2023-04-23 00:00:00
2.2.2.2    Unix Successful    2023-04-16 00:00:00
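A sketch of one way to compare the two weeks per IP, assuming each IP has one row for the latest week and one for the previous week, with the field names shown above:

| stats latest(Host_Auth) as this_week, earliest(Host_Auth) as last_week by IP
| where last_week="Unix Successful" AND this_week!="Unix Successful"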
In Splunk Cloud, if daily ingestion exhausts our DDAS storage, the storage automatically gets expanded, and we are charged for the additional DDAS units at the end of each quarter in which we are above our licensed DDAS limit. What are some best practices to keep DDAS usage in check and prevent exceeding the subscribed limit?
Hi guys, one question. We have a midsize Splunk environment, and the volume of data delivered for ingestion is increasing. We need an architecture where we can handle our high-performance data on one side and the normal data on the other. High-performance data: a high volume of data that needs to be ingested very fast AND is under heavy search load from a specific, known user group. Are there any suggestions? One idea is to separate data ingestion into different streams like this:

+--------------------------------------------------------------+
|                     Loadbalancer (Ingress)                    |
+--------------------------------------------------------------+
           |                       |                       |
+-----------------------+ +-----------------------+ +-----------------------+
| Forwarder Grp 1 / HEC | | Forwarder Grp 2 / HEC | | Forwarder Grp 3 / HEC |
+-----------------------+ +-----------------------+ +-----------------------+
           |                       |                       |
+-----------------------+ +-----------------------+ +-----------------------+
| Indexer Cluster 1     | | Indexer Cluster 2     | | Indexer Cluster 3     |
| (High-Performance IDX)| | (Normal IDX)          | | (High-Performance IDX)|
+-----------------------+ +-----------------------+ +-----------------------+
           |                       |                       |
+-----------------------+ +-----------------------+ +-----------------------+
| Search Head Cluster   | | Search Head Cluster   | | Search Head Cluster   |
| for Power Users       | | for OpenShift         | | for Normal Users      |
+-----------------------+ +-----------------------+ +-----------------------+
           |                       |                       |
+-----------------------+ +-----------------------+ +-----------------------+
| Loadbalancer (SH1)    | | Loadbalancer (SH2)    | | Loadbalancer (SH3)    |
+-----------------------+ +-----------------------+ +-----------------------+

Is this realizable? Are there reference architectures with detailed descriptions of the other components and configuration items? Best regards from Switzerland, Sascha
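As a reference point for the ingestion split, forwarders can steer specific inputs to a dedicated indexer cluster via outputs.conf target groups; the group names, server names, and monitored path below are placeholders:

# outputs.conf on the forwarders
[tcpout]
defaultGroup = normal_idx

[tcpout:normal_idx]
server = idx-normal-1:9997,idx-normal-2:9997

[tcpout:highperf_idx]
server = idx-hp-1:9997,idx-hp-2:9997

# inputs.conf: route a high-volume input to the high-performance cluster
[monitor:///var/log/highvolume/*.log]
_TCP_ROUTING = highperf_idx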
Hi all, we have four environments: sit1, sit2, pat1, and pat2. We have had a lookup table for a long time. About four months ago we added some data to it, and we verified that the data was added. But we are not sure when it disappeared; now we are no longer able to see that added data in the lookup. What could be the reason it is missing?
We are trying to ingest data from CSV files. We have a monitoring stanza in inputs.conf which monitors all CSVs in a folder. We copied one file to that folder and the data got ingested. After that, we tried copying new files to that folder, but it stopped ingesting. The new file is quite different from the previous one. We have also tried a different index/props, but it is the same issue: new files added to the directory are not ingested until Splunk is restarted. Below are the monitoring stanza and props that we used. The inputs and props are on a heavy forwarder, which is sending data to an indexer cluster.

[monitor:///f1/f2/f3/*.csv]
disabled = 0
index = test_input
sourcetype = test
initCrcLength = 2048
_TCP_ROUTING = test_indexer
crcSalt = <SOURCE>

Below is the props:

[test]
INDEXED_EXTRACTIONS = csv
CHECK_FOR_HEADER = true
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = mytime
TIME_FORMAT = %Y-%m-%d %H:%M:%S
FIELD_DELIMITER = ,
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
pulldown_type = true
Hi all, I am doing a search for src_ip and DestAddress in a database within a 1-minute time frame. I need to look for a src_ip count greater than 1 and a DestAddress count greater than 5. Here is the description of the problem: alert when the same source IP appears more than 1 time, across more than 5 destination IPs, within 1 minute. I wonder if my query is correct. Can anyone advise? Thanks

| bin span=1m _time
| stats count(src_ip) as src_ip, count(DestAddress) as DestAddress by _time
| where (src_ip > 1 and DestAddress > 5)
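A sketch of one way to express the stated condition (same source IP seen more than once, across more than 5 distinct destinations, per minute); it groups by src_ip and uses dc() for the distinct destination count:

| bin span=1m _time
| stats count as event_count, dc(DestAddress) as dest_count by _time, src_ip
| where event_count > 1 AND dest_count > 5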
I am running into an issue where my sourcetype is in the wrong destination app. What can I do to fix this? (This is on Splunk Cloud.)
Hi community, I have the following search that returns two numbers, the device count for today and yesterday for index xyz, but I'm not able to visualize both of them; as shown below, "today" is missing. This is the search:

| tstats dc(host) where index=*01* earliest=-1d@d latest=-0d@d
| multikv
| eval TimeWindow="yesterday"
| append [tstats dc(host) where index=*01* earliest=-0d@d latest=now | multikv | eval TimeWindow="today"]

Could you please help me understand how to get a result showing today vs yesterday? Thank you, Roby
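A sketch of one alternative that produces both rows from a single tstats call, assuming host is the right field for the device count:

| tstats dc(host) as device_count where index=*01* earliest=-1d@d latest=now by _time span=1d
| eval TimeWindow=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| table TimeWindow, device_count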
Hi, it seems index-time extractions for audittrail are not supported via traditional props/transforms. Is this expected behavior, and is there an approach that will allow indexing a field from audit.log? Thank you
Hi, I have not received any response from Cisco directly on this topic, so I thought I would try here. I am cleaning up a messy syslog pipeline containing all sorts of devices, including Cisco. I want to put everything Cisco into one index, but I am not sure Cisco syslog formats are the same across all IOS devices: switches, routers, etc. I would assume they are the same, or at least very compatible (syslog/CEF). Can anyone confirm or speak to this question? Ideally I will move to using SC4S, but in the meantime I want to clean up the existing pipeline and use the available TAs to parse/format the data. Any advice appreciated. Thank you