All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


In one of the files, a password seems to be present, and it is being ingested into Splunk. Is there any way to hash or mask that field so the password won't be visible in Splunk during searching, even though it is present in the logs?
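For reference, Splunk can mask such a field at index time with a SEDCMD in props.conf on the indexer or heavy forwarder. A minimal sketch, where the sourcetype name and the regex are assumptions that need to match your actual events; note this masks rather than hashes, so the original value is simply not indexed:

```
# props.conf (on the indexer or heavy forwarder)
# [your:sourcetype] and the password= pattern are placeholders.
[your:sourcetype]
SEDCMD-mask_password = s/password=\S+/password=########/g
```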
Hi Splunkers, I would like to create a Python custom command that writes the results of SPL commands to a CSV file. This is an example of what I want to have:

1 - In Splunk (version 8.0.2):

...(some SPL commands) | table fields1, fields2, fields3

2 - I would then take the table results of the SPL commands and write them to a CSV file in append mode: if a line already exists in the file, do nothing; otherwise, write the new line to the file (that is the main goal*).

This is the Python code I wrote:

#!/usr/bin/env python3
import sys, csv
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators

@Configuration()
class mycommandCommand(StreamingCommand):
    """ %(synopsis)

    ##Syntax
    %(syntax)

    ##Description
    %(description)
    """

    def stream(self, events):
        # Put your event transformation code here
        mycv = {}
        for event in events:
            mycv['field1'] = event["field1"]
            mycv['field2'] = event["field2"]
            mycv['field3'] = event["field3"]

            csv_file = "tmp/Names.csv"
            csv_columns = ['field1', 'field2', 'field3']
            try:
                with open(csv_file, 'a') as csvfile:
                    writer = csv.DictWriter(csvfile, fieldnames=csv_columns, delimiter=";")
                    writer.writeheader()
                    for data in mycv.items():
                        writer.writerows(data)
            except IOError:
                print("I/O error")
            yield event

dispatch(mycommandCommand, sys.argv, sys.stdin, sys.stdout, __name__)

This is the commands.conf:

[mycommand]
filename = mycommand.py
enableheader = true
outputheader = true
requires_srinfo = true
stderr_dest = message
supports_getinfo = true
supports_rawargs = true
supports_multivalues = true
streaming = true

Can someone help? Thanks in advance.
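Not a definitive answer, just a sketch of the append-without-duplicates part in plain Python, outside the splunklib scaffolding: read the rows already in the file into a set, then append only unseen rows, and write the header only when the file is first created. The path and field names are the placeholders from the post:

```python
import csv
import os

CSV_FILE = "/tmp/Names.csv"
FIELDS = ["field1", "field2", "field3"]

def append_unique(rows, csv_file=CSV_FILE, fields=FIELDS):
    """Append each row (a dict) to csv_file unless an identical row already exists."""
    existing = set()
    file_exists = os.path.exists(csv_file)
    if file_exists:
        # Load the rows already on disk so duplicates can be skipped.
        with open(csv_file, newline="") as f:
            for row in csv.DictReader(f, delimiter=";"):
                existing.add(tuple(row.get(k, "") for k in fields))
    with open(csv_file, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, delimiter=";")
        if not file_exists:
            writer.writeheader()  # header only once, for a brand-new file
        for row in rows:
            key = tuple(row.get(k, "") for k in fields)
            if key not in existing:
                writer.writerow(row)
                existing.add(key)
```

In the streaming command this would replace the per-event open/writeheader/writerows block, which currently rewrites the header on every event and passes dict items (tuples) to writerows instead of whole rows.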
How can I make trellis results fit the panel, even when only one result is present, instead of leaving empty space? I want the result to occupy the entire panel. Is there an option to auto-fit when only one result is available?
Hi everyone, I have an issue with the Splunk UF installation on Windows regarding the user. Previously I did all the UF installations with a Splunk service account (a domain account) for all Windows servers, and now I have forgotten the password of the service account. I'm not able to install with a local account (company policy). So my question is: if I reset the password of the service account, will that affect the behavior of the other UFs?
I wanted to ask here before making this change, just for another set of eyes.

Issue: we have /hot and /cold, both with equal amounts of storage and no difference in storage speed between the two volumes. Currently data rolls to cold at 90 days, so cold is filling up while hot stays about 20% full. I'd like to set the following to try to keep data in hot/warm for almost half of our global 13-month retention period. Do the following settings make sense?

[default]
#######retention and hot/warm limits#######
repFactor = auto
# To balance disk space, keep more warm buckets than the default 300.
maxWarmDBCount = 3600
# Idle hot buckets roll to warm if no data is written to them in a day.
maxHotIdleSecs = 86400
# Upper bound of the timespan of hot/warm buckets, in seconds.
maxHotSpanSecs = 15778476
# 13 months; data will roll to the bit bucket unless a frozen directory is specified in the index's stanza.
frozenTimePeriodInSecs = 34136000
# Data coming in on an unconfigured index will land in sandbox.
lastChanceIndex = sandbox

Thanks.
Hi! I want to use a tstats search to monitor for network scanning attempts from a particular subnet:

| tstats `summariesonly` dc(All_Traffic.dest) as dest_count from datamodel=Network_Traffic.All_Traffic where (All_Traffic.dest="10.*" OR All_Traffic.dest="172.*" OR All_Traffic.dest="192.168.*") AND All_Traffic.src=10.128.0.0/16 by All_Traffic.src
| sort - dest_count
| where dest_count > 70

My index2 contains IP addresses and users (src_ip, user, and the event contains the text string "LOCAL") that I would like to match with the All_Traffic.src IP addresses, so that I get the last user name that used each All_Traffic.src in the results. I have tried both join and map with no success:

| tstats `summariesonly` dc(All_Traffic.dest) as dest_count from datamodel=Network_Traffic.All_Traffic where (All_Traffic.dest="10.*" OR All_Traffic.dest="172.*" OR All_Traffic.dest="192.168.*") AND All_Traffic.src=10.128.0.0/16 by All_Traffic.src
| sort - dest_count
| where dest_count > 70
| rename All_Traffic.src AS src_ip
| join type=left src_ip [search index=index2 "LOCAL" | head 1 | fields src_ip user ]
| table All_Traffic.src dest_count user

join returns All_Traffic.src and dest_count, but no users.

| tstats `summariesonly` dc(All_Traffic.dest) as dest_count from datamodel=Network_Traffic.All_Traffic where (All_Traffic.dest="10.*" OR All_Traffic.dest="172.*" OR All_Traffic.dest="192.168.*") AND All_Traffic.src=10.128.0.0/16 by All_Traffic.src
| sort - dest_count
| where dest_count > 70
| rename All_Traffic.src AS srcip
| map search="search index=index2 "LOCAL" src_ip=$srcip$ | head 1 | fields user"
| table All_Traffic.src dest_count user

map returns users, but no All_Traffic.src and dest_count.

What is the correct way to get the results I need? Thank you.
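A possible reason the join variant comes back without users: the subsearch ends with | head 1 | fields src_ip user, so it returns a single row for the whole subsearch rather than one user per src_ip, and after the rename the field to display is src_ip, not All_Traffic.src. A hedged sketch of the join, with field names taken from the post and untested against your data:

```
| tstats `summariesonly` dc(All_Traffic.dest) as dest_count
    from datamodel=Network_Traffic.All_Traffic
    where (All_Traffic.dest="10.*" OR All_Traffic.dest="172.*" OR All_Traffic.dest="192.168.*")
      AND All_Traffic.src=10.128.0.0/16
    by All_Traffic.src
| where dest_count > 70
| rename All_Traffic.src AS src_ip
| join type=left src_ip
    [ search index=index2 "LOCAL" | stats latest(user) as user by src_ip ]
| sort - dest_count
| table src_ip dest_count user
```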
Hi guys, I followed the instructions here: https://docs.splunk.com/Documentation/Splunk/8.0.2/Data/MonitorWindowsdatawithPowerShellscripts This did not work, because the SYSTEM user has no rights to access the information I want to read with the script. So I should run the PowerShell script under a different user. How can I configure this? Regards, Tobias
Hi team! We would like to upgrade the Splunk platform, but we cannot upgrade the UF. Because of that, we were wondering the following:

What would be the minimum universal forwarder version compatible with Splunk 7.3.4?
According to the compatibility table, the 6.0.x - 6.1.x versions require a special configuration in the universal forwarder. How is this change performed? And, once it has been performed, will the dispatch still work as it did before the upgrade? We understand that in those cases it might make sense to ask for a universal forwarder upgrade.
According to the compatibility table, the 6.2.x - 6.6.x versions do not require a special configuration, but they do not allow the dispatch of metric data. We have been looking for this functionality in the Splunk documentation (which, by the way, we have found very interesting), and it only appears from version 7 onwards. Could it be that there is no 6.5.x version available? We mention this especially because in that case we would not have to take anything more into account.

Could you help us with these? We would be very thankful.
Hi all, I am not able to forward syslog data with SOURCE=netscaler to a third party through port 514. I overrode the source and am trying to forward to the third-party server, but it is not working. In transforms.conf the FORMAT lists two target types (one is the third-party syslog server and the other is the existing indexers), but it is not working. Please help.

Forward syslog data SOURCE=netscaler to third party through port 514:

props.conf

[source::/Opt/logs/*/netscaler-*.log]
TRANSFORMS-nyc = TO_TP_NET

transforms.conf

[TO_TP_NET]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = TP_NET_syslog_group,Existing_indexers_Target_Group

outputs.conf

[syslog:TP_NET_syslog_group]
server = Thirdpartyservername:514
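One thing worth checking, sketched below as an assumption rather than a confirmed fix: a FORMAT under DEST_KEY = _SYSLOG_ROUTING should reference only syslog output groups from outputs.conf. Routing to the existing indexers normally happens through the default tcpout group (or a separate _TCP_ROUTING transform), not by listing a tcpout group in the syslog FORMAT:

```
# transforms.conf
[TO_TP_NET]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = TP_NET_syslog_group

# outputs.conf
[syslog:TP_NET_syslog_group]
server = Thirdpartyservername:514

# Keep the existing indexers as the default tcpout group so events
# continue to be indexed as well.
[tcpout]
defaultGroup = Existing_indexers_Target_Group
```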
Hello, I use the stats command below to count the number of indexes to which a host sends events:

| stats dc(index) AS "Number of index" BY host

Now I need to use tstats instead of stats, so I am doing something like:

| tstats dc(index) as "Number of index"

But when I do this I get an error message:

Error in 'TsidxStats': Aggregations are not supported for index, splunk_server and splunk_server_group

What is the problem, please?
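One common workaround (a sketch, not verified on your instance): tstats cannot aggregate over the index field directly, but it can group by it, so the distinct count can be split between tstats and stats:

```
| tstats count where index=* by host index
| stats dc(index) as "Number of index" by host
```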
I have to show active VPN users at any point in time, e.g. the last 15 minutes, the last hour, etc., but these have to be shown based on the users' login and logout status. When I take a longer time span the count does not match, because it counts status=login even when the user has already logged out. How do I resolve this issue? My logs have the fields userid and status (login and logout), so I have to calculate accordingly. These are syslogs from a Palo Alto VPN.

One login event:

Mar 18 16:11:57 172.x.x.x 1,2020/03/18 16:11:57,013101002125,USERID,login,2049,2020/03/18 16:11:44,vsys3,10.252.110.43,dca\user1,,0,1,10800,0,0,vpn-client,globalprotect,6691206232784508797,0x8000000000000000,14,18,0,0,3,,2020/03/18 16:11:45,1,0x80000000,user1

One logout event:

Mar 18 16:01:53 172.x.x.x 1,2020/03/18 16:01:53,013101002083,USERID,logout,2049,2020/03/18 16:01:42,vsys3,10.192.114.25,user2,,0,1,0,0,0,vpn-client,globalprotect,6691213783337015900,0x8000000000000000,14,18,0,0,,,3,,2020/03/18 16:01:43,1,0x80000000,user2

I can use the query below:

index=paloalto sourcetype="pan:log" status=login
| stats dc(userid) as login_count
| appendcols [search index=paloalto sourcetype="pan:log" status=logout | stats dc(userid) as logout_count]
| eval active=login_count-logout_count
| table login_count logout_count active

Please help: using the above query, the figure does not match the actual number of active Palo Alto users.
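The dc(login) minus dc(logout) approach over-counts because it ignores ordering: a user who logged in and back out within the window contributes to both counts. A hedged sketch of an alternative, assuming the latest event per userid reflects that user's current state:

```
index=paloalto sourcetype="pan:log" (status=login OR status=logout)
| stats latest(status) as last_status by userid
| where last_status="login"
| stats count as active_users
```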
Hi, I need to color one column with the help of another. For example, color the sourcetype cell based on the count. Here is my code, but the coloring is not happening.

<html>
  <style>
    .css_for_green { background-color: #65A637 !important; }
    .css_for_yellow { background-color: #FFFF00 !important; }
    .css_for_red { background-color: #FF0000 !important; }
    .css_for_grey { background-color: #EEE000 !important; }
  </style>
</html>

<table id="tableWithColorBasedOnAnotherField">
  <search>
    <query>index=_internal | stats count by sourcetype | eval sourcetype=sourcetype."|".count</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <sampleRatio>1</sampleRatio>
  </search>
  <option name="count">20</option>
  <option name="dataOverlayMode">none</option>
  <option name="drilldown">none</option>
  <option name="percentagesRow">false</option>
  <option name="refresh.display">progressbar</option>
  <option name="rowNumbers">false</option>
  <option name="totalsRow">false</option>
  <option name="wrap">true</option>
</table>

dashboard.js:

require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function (_, $, mvc, TableView) {
    var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
        canRender: function (cell) {
            return _(['sourcetype']).contains(cell.field);
        },
        render: function ($td, cell) {
            var label = cell.value.split("|")[0];
            var val = cell.value.split("|")[1];
            if (val > 4) {
                $td.addClass("range-cell").addClass("css_for_green");
            } else if (val > 5) {
                $td.addClass("range-cell").addClass("css_for_yellow");
            } else if (val > 6) {
                $td.addClass("range-cell").addClass("css_for_red");
            }
            $td.text(label).addClass("string");
        }
    });
    var sh = mvc.Components.get("tableWithColorBasedOnAnotherField");
    if (typeof (sh) != "undefined") {
        sh.getVisualization(function (tableView) {
            // Add custom cell renderer and force re-render
            tableView.table.addCellRenderer(new CustomRangeRenderer());
            tableView.table.render();
        });
    }
});

Please help me figure out where I am wrong.
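For what it's worth, two things to check in a renderer like the one above, shown as a standalone sketch (the function name is mine, the thresholds are the ones from the post): cell.value.split() yields strings, so the value should be parsed to a number, and an if/else chain that tests the smallest threshold first means the yellow and red branches can never be reached; testing the largest threshold first fixes the ordering.

```javascript
// Standalone sketch of the threshold logic: parse the value to a number
// and test the largest threshold first, otherwise "val > 4" swallows
// every value that should be yellow or red.
function colorClassFor(rawValue) {
    var val = parseInt(rawValue, 10);
    if (val > 6) { return "css_for_red"; }
    if (val > 5) { return "css_for_yellow"; }
    if (val > 4) { return "css_for_green"; }
    return null;
}
```

In render() this would be used as: var cls = colorClassFor(cell.value.split("|")[1]); if (cls) { $td.addClass("range-cell").addClass(cls); }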
I have a CSV file called ports.csv that contains one column called "port", holding all of the port numbers 0-1024. I want to use this CSV to filter the dest_port field in my Splunk search. Essentially, I want to see only events whose destination port is in the range 0-1024. How can I do this? Is there an easier way to do it without a CSV lookup file?
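If the goal really is just ports 0-1024, a numeric comparison avoids the lookup entirely; alternatively, the CSV can drive the filter as a subsearch. Both sketches below assume dest_port is extracted as a number and that index=your_index is a placeholder for your base search; the first variant is the simple comparison, the second uses the existing ports.csv:

```
index=your_index dest_port<=1024

index=your_index [ | inputlookup ports.csv | rename port AS dest_port ]
```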
The Lookup Editor appears to be incorrectly converting epoch time. For example, I am working on the ES Malware_Tracker, and when I pull that KV store up in the editor, all epoch times are converted incorrectly (1970/01/19...). If I use inputlookup and convert the corresponding fields to string, I get the expected time ranges. The problem seems identical to answer 730398, "kv-store-time-fields-in-lookup-editor-are-not-show", which was related to bug 4593568: https://answers.splunk.com/answers/730398/kv-store-time-fields-in-lookup-editor-are-not-show.html The problem is, I have a newer 8.0.1 fresh install with the newest version of the Lookup Editor (3.3.3).
I'm deploying an on-prem architecture consisting of a deployment server and a number of heavy forwarders forwarding data into Splunk Cloud. The on-prem components only forward and do not index. I was wondering whether it is possible to have all of the on-prem Splunk instances (DS and HFs) act as license slaves to the Splunk Cloud license master, and how to do that. I tried pointing them to the Splunk Cloud license master, but the connection times out. There isn't clear documentation on whether this is possible. I am aware that since the HFs aren't indexing I could use a forwarder license, and for the DS request a 0 MB license from Splunk. However, connecting them to the Splunk Cloud license master seems more future-proof and allows for expansion and possible changes down the line, including forwarding plus indexing.
Is anyone using the "Network Toolkit" app currently? I need to know the maximum number of devices that can be configured for ping monitoring with it.
Hi, I am running a small cluster (2 indexers, 1 SH, 1 DS), and when I use the monitoring console on the deployment server it happily shows me indexing information for 60 days. But I only see 8 days of CPU/memory data for the boxes. What do I need to adjust to get a longer history span for the resource usage? And while I am at it, also for the performance... Thanks, afx
Hello Splunkers! I have the following fields being populated by thousands of values every minute:

Name Cost

E.g.:

Luke 1.25
Luke 1.22
Dave 2.45
Dave 2.57

Bearing in mind that there are over 1000 Cost values coming in for each Name each minute, I want to identify the biggest movers in terms of Cost over a 5-minute period, thereby identifying the most volatile Names in a timechart. Can anyone tell me how I would do this, please?
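Not a definitive answer, but one way to sketch "biggest movers", assuming a mover is measured as the change in the average Cost between consecutive 5-minute buckets (the index name is a placeholder): bucket the data, compute an average per Name, then chart the bucket-to-bucket range per Name.

```
index=your_index
| bin _time span=5m
| stats avg(Cost) as avg_cost by Name _time
| streamstats current=t window=2 range(avg_cost) as move_5m by Name
| timechart span=5m max(move_5m) by Name limit=10
```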
Hello, we'd like to monitor role modifications of our Splunk accounts. The goal is to know who modified which role and which user. Unfortunately, we were not able to find a good query to do that.

index=_audit action=edit_user has no information about the type of change or the role changed.
index=_audit action=edit_roles OR action=edit_roles_grantable has no information about the user whose role has been changed.
And we were not able to figure out whether | rest services/authorization/roles could be used for this purpose.

In addition, it looks like both index-based requests return a lot of system events that pollute the results. Do you have an idea how this supervision could be set up properly? Thanks for the help.
After installing the Palo Alto Networks Add-on for Splunk, I'm getting a message saying:

Unable to initialize modular input "minemeld_feed" defined in the app "Splunk_TA_paloalto": Introspecting scheme=minemeld_feed: script running failed (exited with code 1)

The add-on is not doing anything in the web interface. I've tried reinstalling it and also installing an older version, but the error stays the same.