All Posts

Okay @PotatoDataUser , so you have created the KPI but it isn't populating? Are you able to see any data for that KPI in the itsi_summary index?
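A quick way to check is something like the following (the kpi and alert_value field names are from memory, so treat this as a sketch and verify them on your instance):

index=itsi_summary kpi="<your KPI title>"
| table _time, kpi, alert_value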
Hi @SN1  Some good answers here. It's worth noting that for me | rest /services/server/status/partitions-space doesn't give me the right data, and it can depend on how your partitions are configured (e.g. multiple partitions for hot/warm/cold etc.).

If you're using Linux then it's also worth checking something as simple as df -h on the Linux command line. This will list all the filesystems on the server and show you the size, used and available disk space.

I'd definitely recommend setting up proper monitoring using the Splunk TA for *Nix to cover your servers and all their partitions and filesystems.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
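If the REST endpoint does work on your version, a rough starting point is below - I recall it returning capacity and free (in MB) per mount_point, but check the field names on your instance:

| rest /services/server/status/partitions-space
| eval used_pct = round((capacity - free) / capacity * 100, 1)
| table splunk_server, mount_point, capacity, free, used_pct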
Hi @livehybrid , For now I just want the sum of counts of all channels. I want to use the sum functionality of the KPI builder rather than modifying the query. The only way I know how to do it for individual channels is to modify the query to search for the channel in question. I would really appreciate any alternative method. Thanks.
Hi @PotatoDataUser  Are you wanting to break it down by Channel? Or are you looking for just a sum of all channels? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
I have been having some trouble with Generic KPI setup in Splunk ITSI. I have a query that returns data in the form

Channel     Count
Channel1    1000
Channel2     800
Channel3    1200

and so on. So I wanted to set up a KPI that runs this query with the alert value being the sum of all the "Count" values; here's how I configured it. I enabled a 7-day backfill, and I don't have any split-by-entity rules. I am able to see that the alert value is being captured in the generated search from the KPI builder, but I am unable to see any KPI data or values being captured even when I let it sit for a while. Please help me with the setup. TIA
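Just to be explicit about what I'm after: the equivalent of appending this to my query (a sketch of the intent, not what the builder actually generates):

| stats sum(Count) as alert_value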
I found that I had a typo! 
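For anyone landing here later: the likely culprit in the SPL below is the stray trailing space inside the quoted ".color " path segment, so the fixed eval would presumably be:

| eval colorPath="data.tree.branch." . type . ".color"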
Using your SPL, I expanded it a bit to be closer to my dataset. This new SPL is not working whereas the one you provided does indeed work. Maybe it is something stupid but I'm stumped!

| makeresults
| eval json="[{ \"data\": { \"tree\": { \"branch\": { \"common\": { \"type\": \"Apple\" }, \"apple\": { \"color\": \"red\" } } } } }]"
| eval events=json_array_to_mv(json)
| mvexpand events
| eval _raw=events
| fields _raw
| spath "data.tree.branch.common.type" output=TypeTemp
| eval type = lower(TypeTemp)
| eval colorPath="data.tree.branch." . type . ".color "
| eval color=json_extract(_raw, colorPath)
| table _time, color, type, colorPath

Suggestions welcome once again.
Looks like there has been a change in styling (CSS) with the Splunk update. This should do the trick (note the styling for MasterRow):

<row id="MasterRow">
  <panel depends="$alwaysHideCSS$">
    <title>Single value</title>
    <html>
      <style>
        #MasterRow{display: block;}
        #Panel1{width:15% !important;}
        #Panel2{width:85% !important;}
      </style>
    </html>
  </panel>
  <panel id="Panel1">....</panel>
  <panel id="Panel2">....</panel>
</row>
Hi @cking2600  Whilst the app might work fine with Splunk 9.4, it's not necessarily possible for other users to test every way of using the app - in other words, it might work fine on 9.4 for you, but your mileage may vary... I think the best thing to do would be to contact Cisco's support (support@cisco.com) or Cisco Worldwide Support - https://www.cisco.com/c/en/us/support/web/tsd-cisco-worldwide-contacts.html - and enquire directly, as they may have more information. Of course, it could just be an oversight and they haven't ticked the box for 9.4 on the Splunkbase app config page. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
The difference between disk usage and memory has already been pointed out. There is one more thing worth noting - disk utilization on indexers is usually managed by adjusting retention parameters (you might also get some additional usage from knowledge bundles and intermediate results, but those are rarely very significant). Memory usage, on the other hand, can vary greatly depending on the load at the time of checking, since memory is used mostly for searching: the more complicated the searches you're running at any given moment, the higher the memory usage.
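To illustrate the retention side, the knobs are set per index in indexes.conf - a minimal sketch with illustrative values:

[my_index]
# Freeze (delete or archive) buckets older than 90 days
frozenTimePeriodInSecs = 7776000
# Cap the index at roughly 500 GB on disk across hot/warm/cold
maxTotalDataSizeMB = 512000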
Well... TA_nix, without careful tweaking of what it reports, can be a handful. It's just a bunch of ziptie-and-duct-tape-connected scripts giving you some relatively unfriendly output, and if you just install it and enable all inputs, it can get noisy.
As @gcusello already pointed out - if your subsearch reaches into the same index as the outer search, it's pointless to do it this way; limiting the base search is enough. But if you want to correlate events from two different indexes, you'd need to search both indexes at the same time (join the conditions with OR) and then do some stats by a common field. Anyway,

| stats count by entity_id | fields entity_id

doesn't make much sense. It's enough to do

| stats values(entity_id) as entity_id
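A generic sketch of that OR-plus-stats pattern (index and field names are placeholders):

(index=indexA sourcetype=typeA) OR (index=indexB sourcetype=typeB)
| stats values(fieldA) as fieldA, values(fieldB) as fieldB by common_id
| where isnotnull(fieldA) AND isnotnull(fieldB)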
1. BOM just adds three bytes at the beginning of the file. That's it. It doesn't change anything within the UTF-8 encoding.
2. Did you verify that those bytes are added on export? (See the snippet below.)
3. Did you verify how those bytes are exported? (See the raw file contents and check the actual code points.)
4. Excel isn't always right, you know?
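A quick way to check points 2 and 3 - a minimal Python sketch (the filename is a placeholder):

# Read the first three bytes of the exported file and compare with the UTF-8 BOM (EF BB BF)
with open("export.csv", "rb") as f:
    head = f.read(3)
print(head.hex(), "->", "BOM present" if head == b"\xef\xbb\xbf" else "no BOM")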
I want to convert UTF-8 text to UTF-8-with-BOM text.
OK. So your transform assigning the index based on source does work for the same data?
The only thing happening on this Heavy Forwarder is collecting logs, assigning an index based on the source using transforms.conf and props.conf, and then forwarding them to the indexer cluster.
I want to use the Splunk App for Lookup File Editing to export a CSV lookup file so that the downloaded file opens with UTF-8-with-BOM encoding on my local PC. How can I do this? I went into the path below and modified lookup_editor_rest_handler.py, adding a UTF-8 BOM to csv_data, but the change is not applied. When the user opens the exported file in Excel on a local PC, non-English characters are decoded incorrectly.

$SPLUNK_HOME/etc/apps/lookup_editor/bin/lookup_editor_rest_handler.py

import codecs

def post_lookup_as_file(self, request_info, lookup_file=None, namespace="lookup_editor",
                        owner=None, lookup_type='csv', **kwargs):
    self.logger.info("Exporting lookup, namespace=%s, lookup=%s, type=%s, owner=%s",
                     namespace, lookup_file, lookup_type, owner)
    try:
        # If we are getting the CSV, then just pipe the file to the user
        if lookup_type == "csv":
            with self.lookup_editor.get_lookup(request_info.session_key, lookup_file,
                                               namespace, owner) as csv_file_handle:
                csv_data = csv_file_handle.read()
                csv_data = codecs.BOM_UTF8.decode('utf-8') + csv_data
        # If we are getting a KV store lookup, then convert it to a CSV file
        else:
            rows = self.lookup_editor.get_kv_lookup(request_info.session_key, lookup_file,
                                                    namespace, owner)
            csv_data = shortcuts.convert_array_to_csv(rows)

        return {
            'payload': csv_data,  # Payload of the request
            'status': 200,        # HTTP status code
            'headers': {
                'Content-Type': 'text/csv; charset=UTF-8',
                'Content-Disposition': f'attachment; filename*=UTF-8\'\'{lookup_file}'
            },
        }
    except (IOError, ResourceNotFound):
        return self.render_error_json("Unable to find the lookup", 404)
    except (AuthorizationFailed, PermissionDeniedException):
        return self.render_error_json("You do not have permission to perform this operation", 403)
    except Exception as e:
        self.logger.exception("Export lookup: details=%s", e)
        return self.render_error_json("Something went wrong!")
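Independent of the lookup_editor internals, here is a standalone way to sanity-check what a BOM-prefixed export should look like, using Python's built-in utf-8-sig codec, which prepends the BOM on write (filename and sample content are placeholders):

# Writing with the "utf-8-sig" codec prepends the UTF-8 BOM (EF BB BF),
# which is what Excel looks for when auto-detecting the encoding.
csv_data = "이름,값\n테스트,1\n"  # sample non-English content
with open("export_with_bom.csv", "w", encoding="utf-8-sig", newline="") as f:
    f.write(csv_data)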
Hi @yuanliu , I don't need "Select All" in my classic dashboard, but I do need "Select All Matches":
I have made a demo Dockerfile to reproduce the problem: https://github.com/fabarea/appdynamics-php-alpine-example.git

Note: I have not included the AppDynamics PHP agent file in the repo (appdynamics-php-agent-x64-linux-24.11.0.1340.tar.bz2), as I am not sure redistributing it is allowed; it can be downloaded from AppDynamics, however.

To answer your questions: the container is run as root for now, so there is no permission issue, and I am limiting the example to the PHP CLI for now.
Hi @dzhangw7 , why are you using a subsearch? You can put all the conditions in the main search:

index=my_index "Check something" (("Extracted entities" AND "'date': None") OR extracted_entities.date=null)
| timechart count by classification

optionally adding a condition on identity_id:

index=my_index "Check something" (("Extracted entities" AND "'date': None") OR extracted_entities.date=null) identity_id=*
| timechart count by classification

Ciao. Giuseppe