All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have enabled SSO authentication for my Splunk instance. However, I am still able to log in as a local user via the en-US/account/login?loginType=splunk URL. Is there an option to disable this bypass to tighten security?
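One possibility, assuming trusted-IP (proxy) SSO is in use, is to set strict mode in web.conf so that requests not matching the trusted IP list are rejected instead of falling back to the local login form; for SAML SSO there is no equivalent single switch, so disabling or removing local accounts is the usual route. A sketch, with a placeholder proxy IP:

```ini
# web.conf (sketch; SSOMode applies to trusted-IP/proxy SSO, not SAML)
[settings]
SSOMode = strict
trustedIP = 10.0.0.5
```

Verify the mode against your deployment before relying on it for security.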
| djquery -database ad -query "select short_description from bu_projectdetails order by [order]" — this is the given query. When I replace it with a lookup file, the result comes back in the search but does not appear in the dropdown:

| inputlookup bu_projectdetails_lookup.csv
| sort order
| table short_description
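Assuming this is a Simple XML dashboard dropdown, the populating search has to return the field named in the input's fieldForLabel/fieldForValue settings, and any sorting must happen while the sort field is still present. A sketch of the input definition (token name and labels are placeholders):

```xml
<input type="dropdown" token="project">
  <label>Project</label>
  <fieldForLabel>short_description</fieldForLabel>
  <fieldForValue>short_description</fieldForValue>
  <search>
    <query>| inputlookup bu_projectdetails_lookup.csv | sort order | table short_description</query>
  </search>
</input>
```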
Hi, I need to optimize my query to improve dashboard performance without using any kind of join. Below is my query:

| inputlookup sample.csv
| search user IN ( ) application_name IN () "application id" IN (*)
| eval None="None"
| table "application id", application_name, user, Status, Type, "Service Host", Platform, Jan, Feb, Mar, Apr, None, env
| rename application_name as Server_Name
| eval Server_Name=upper(Server_Name)
| join type=left Server_Name
    [ search index=idx sourcetype=xyz
    | eval Server_Name=upper(Server_Name)
    | search Status!="Completed"
    | table Server_Name Status ]
| search Status!="Completed"
| stats sum(Jan) as jan sum(Feb) as feb sum(Mar) as mar sum(Apr) as apr by env
| eval total = jan + feb + mar + apr
| table env total

Please help me optimize this query without using join.
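A common join-free pattern, sketched under the assumption that Server_Name is the only shared key, is to append the two result sets and merge them with stats:

```spl
| inputlookup sample.csv
| rename application_name as Server_Name
| eval Server_Name=upper(Server_Name)
| append
    [ search index=idx sourcetype=xyz Status!="Completed"
    | eval Server_Name=upper(Server_Name)
    | stats latest(Status) as Status by Server_Name ]
| stats values(Status) as Status, first(Jan) as Jan, first(Feb) as Feb,
        first(Mar) as Mar, first(Apr) as Apr, first(env) as env by Server_Name
| search Status!="Completed"
| stats sum(Jan) as jan sum(Feb) as feb sum(Mar) as mar sum(Apr) as apr by env
| eval total = jan + feb + mar + apr
| table env total
```

This avoids join's subsearch result limits; the stats merge collapses the lookup row and the indexed Status onto one row per Server_Name.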
Good morning, everyone. We use Nessus for compliance checks and want to audit Splunk, but no audit file based on the CIS Benchmarks is provided by Tenable. Does anyone have an audit file built from the CIS recommendations, or resources for creating an audit file that maps to those recommendations? Thank you in advance.
I have 2 Linux servers in place and would like to monitor the commands run over PuTTY (SSH) sessions. Is there any way we can do this? Also, what is the file path where these logs get stored, so that I can monitor it? (I have the Splunk App and Add-on for Unix and Linux.) In simple terms: whatever I do in PuTTY, I want to monitor it.
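One common approach, assuming auditd is available on the servers, is to have the kernel audit subsystem record every executed command and then have Splunk monitor the audit log; the rule key, paths, sourcetype, and index below are examples to adapt:

```ini
# /etc/audit/rules.d/exec.rules (sketch): log every execve call
-a always,exit -F arch=b64 -S execve -k commands
-a always,exit -F arch=b32 -S execve -k commands

# inputs.conf on the universal forwarder
[monitor:///var/log/audit/audit.log]
sourcetype = linux:audit
index = os
```

PuTTY itself leaves no server-side log; only what the session executes (captured here by auditd) or what lands in /var/log/secure or /var/log/auth.log (logins) is visible to Splunk.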
I would like to create a bi-directional bar chart. I have 2 fields, monthly closed and monthly created tickets, and I want to plot a bar chart with one field above the X axis and the other below it. I tried all the options available under Chart Overlay in the UI, and also searched for XML code, but with no success. Can anybody provide some pointers?
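A sketch of one workaround, assuming hypothetical field names monthly_created and monthly_closed: negate one series with eval so it plots below the X axis on a standard column chart (the base search is a placeholder).

```spl
... | chart sum(created) as monthly_created, sum(closed) as monthly_closed by month
| eval monthly_closed = -monthly_closed
```

The axis then shows negative numbers for the closed series; a number-format suffix or custom axis labels can hide the minus sign if needed.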
Hi, I have a problem with the limit on metrics sent to the controller: it is capped at 450 metrics, which is very little for supervising multiple VMs. Do you know how to raise the metric limit without editing the Java file? (I can't edit it at my company: no Java editor, and I am not authorized to download a compiler.) If we use all metrics, you can have 17 metrics per server and 7 per host. I have 19 hosts, so basically I can only fully supervise 1 or 2 VMs per host (if my hosts are fully supervised). In my case, I have only 1 or 2 elements supervised per server/host... Log: [Redacted] Thanks for the help. Bye. ^ Post edited by @Ryan.Paredez to remove log file from post. Please do not share or attach log files to community posts.
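One possibility, assuming this is an AppDynamics agent, is to raise the limit with a JVM system property on the agent's startup command line rather than editing any Java source. The property name and value below are an assumption to verify against your agent version's documentation:

```shell
# Machine Agent startup (sketch; property name/value to be confirmed for your version)
java -Dappdynamics.agent.maxMetrics=5000 -jar machineagent.jar
```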
Hi everyone! Is there any way to increase the granularity? For example, if I want to store 1-minute granularity for 2 days, how can I do that? Best, Sandor
Hi, I'm having a problem connecting from Splunk DB Connect to a MongoDB replica. I can telnet from Splunk to the IP of the Mongo replica, and I also tried connecting to the Mongo primary DB, which succeeded. But when connecting to the Mongo replica DB, I encounter the error "Database connection MongoDB is invalid. not talking to master and retries used up". The Splunk DB Connect version is 3.1.4 and the Splunk version is 7.2.3. Can you help me find whether there is a problem in my configuration below?

[mongo]
displayName = MongoDB
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = mongodb.jdbc.MongoDriver
jdbcUrlFormat = jdbc:mongo://IP:27017/dbname?replicaSet=dbname&readPreference=secondaryPreferred&connectWithNoPrimary=true&slaveOk=true
jdbcUrlSSLFormat = jdbc:mongo://IP:27017/dbname?replicaSet=dbname&readPreference=secondaryPreferred&connectWithNoPrimary=true&slaveOk=true
ui_default_catalog = dbname
port = 27017
Hi, I use the search below, which works fine. As you can see, I count the number of hosts corresponding to a process_cpu_used_percent scale (0-20, 20-40, 40-60, ...), but what I need is the average of process_cpu_used_percent, in order to identify the number of hosts whose average falls in a scale of 0-20, 20-40, 40-60, etc. I tried something like eval(case(avg(process_cpu_used_percent>0 AND process_cpu_used_percent<=20,"0-20", ...) but it doesn't work.

`CPU`
| fields process_cpu_used_percent host
| eval cpu_range=case(process_cpu_used_percent>0 AND process_cpu_used_percent<=20,"0-20", process_cpu_used_percent>20 AND process_cpu_used_percent<=40,"20-40", process_cpu_used_percent>40 AND process_cpu_used_percent<=60,"40-60", process_cpu_used_percent>60 AND process_cpu_used_percent<=80,"60-80", process_cpu_used_percent>80 AND process_cpu_used_percent<=100,"80-100")
| chart dc(host) as "Number" by cpu_range
| search cpu_range=$tok_filtercpu$
| append [| makeresults | fields - _time | eval cpu_range="0-20,20-40,40-60,60-80,80-100" | makemv cpu_range delim="," | mvexpand cpu_range | eval "Number"=0]
| dedup cpu_range
| sort cpu_range
| transpose header_field=cpu_range
| search column!="_*"
| rename column as cpu_range

Could you help me, please?
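A sketch of one way to do this, assuming the `CPU` macro returns one measurement per event: compute the average per host first with stats, then bucket the averages.

```spl
`CPU`
| stats avg(process_cpu_used_percent) as avg_cpu by host
| eval cpu_range=case(avg_cpu>0 AND avg_cpu<=20,"0-20",
                      avg_cpu>20 AND avg_cpu<=40,"20-40",
                      avg_cpu>40 AND avg_cpu<=60,"40-60",
                      avg_cpu>60 AND avg_cpu<=80,"60-80",
                      avg_cpu>80,"80-100")
| chart dc(host) as "Number" by cpu_range
```

The original eval/case cannot wrap avg() because aggregation only happens inside stats/chart/timechart; splitting the pipeline into stats-then-case sidesteps that.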
Hello Splunkers, I have a report scheduled to run at 0 minutes past every hour, to generate tabular results for the last 60 minutes and send an email including a link to the results. The report has successfully sent the email each hour (at 00:00, 01:00, ..., 23:00). But if, at 12:10, I access the results link from the email that was generated at 10:00, I see only the latest results (i.e., the results generated at 12:00, even though I clicked the earlier link). Can anyone help me understand how to view the results generated for a particular time range by clicking the link in the respective email?
Hi, I have a scenario where I would like to extract the field OfferCode, which has a space before and after the colon: OfferCode : XYZAQERWSD Please help with a rex command to extract this field.
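A sketch, assuming the value is a single non-whitespace token and the spacing around the colon may vary:

```spl
| rex "OfferCode\s*:\s*(?<OfferCode>\S+)"
```

`\s*` tolerates any amount of whitespace on either side of the colon, and `\S+` stops the capture at the next space.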
I deleted data in a clustered index by setting a very short retention policy, as described here: https://answers.splunk.com/answers/83767/how-do-i-clean-a-clustered-index.html Even after I reverted the configuration to the original, Splunk does not load the same data files that were once loaded. Is there any way to reload the same files? It does load new files. I tried creating a new index to load them into, but that also failed. Thank you.
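Forwarders track what they have already read in the fishbucket, so files that were indexed once are skipped even after the index itself is emptied. Two sketches of the usual workarounds, with example paths to adjust (verify the btprobe flags against your Splunk version before running it):

```shell
# on the forwarder (sketch): reset the fishbucket checkpoint for one file
splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /path/to/data.log --reset
```

```ini
# or, in inputs.conf, salt the CRC with the full path so re-pointed/renamed copies are re-read
[monitor:///path/to/data.log]
crcSalt = <SOURCE>
```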
We have to ingest Akamai logs into Splunk. As per the cloud team's recommendation, use of S3 buckets is mandatory. How can we send Akamai logs to S3 buckets? Once the S3 buckets receive the logs, they will be pulled by a heavy forwarder.
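For the pull side, a sketch assuming the Splunk Add-on for AWS is installed on the heavy forwarder; the stanza name, account, bucket, sourcetype, and index below are placeholders to adapt, and the add-on's SQS-based S3 input is generally recommended over the generic one for high-volume buckets:

```ini
# inputs.conf (Splunk Add-on for AWS, generic S3 input; all values are examples)
[aws_s3://akamai_logs]
aws_account = my_aws_account
bucket_name = my-akamai-log-bucket
sourcetype = akamai:logs
index = web
```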
Hello there,

Step 1:

user  software_name  dc_today  dc_past
A     XYZ.exe        1         9
B     PQR.exe        2         3
C     DTA.exe        0         1

The final result should be:

user  software_name  dc_today
A     XYZ.exe        1

My method:

index=* _index_earliest=-1d
| stats dc(user) as dc by software_name
| eval dc_today=if(dc=1, 1, 0)
| append [search index=* _index_earliest=-5d | stats dc(user) as dc by software_name | eval dc_past=if(dc=1,1,0)]
| table user software_name dc_today dc_past

So I am running two similar searches that differ only in timespan. 1) The append is not reflecting the sub-search. 2) Is there a more efficient way to do this? Thanks in advance! KanJ
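A single-search sketch, assuming _indextime is the right clock for splitting "today" from "past" and that the goal is software used by exactly one distinct user in the last day:

```spl
index=* _index_earliest=-5d
| eval window=if(_indextime >= relative_time(now(), "-1d"), "today", "past")
| stats dc(eval(if(window="today", user, null()))) as dc_today,
        dc(eval(if(window="past", user, null()))) as dc_past,
        values(user) as user by software_name
| where dc_today=1
```

One pass over the data replaces the append; the eval-inside-dc trick counts each window separately. The original append also "works" but puts the subsearch counts on separate rows, which is why they appear missing; a trailing stats values(*) as * by software_name would merge them.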
We're trying to extract fields that match this: [ FIELD_NAME = S0m3 Valu3 w\ reaLLy $pec!aL ch*rac+3rs ] and write them to the tsidx files so that they're consumable with tstats. We're using the transforms/props partnership below.

# transforms.conf
[hello_transforms]
REGEX = (?<key>[\w]+)\s\=\s(?<value>[^\]]+)
FORMAT = $1::$2
REPEAT_MATCH = true
WRITE_META = true

# props.conf
[hello]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
TRANSFORMS-capturer = hello_transforms

While this does what's expected for most of the fields (i.e., the fields are written to disk, verified through walklex), some values fail to be captured entirely or as expected. For example, for [ REMARKS = A Kerberos authentication ticket (TGT) was requested. ] Splunk only captured "A". The regex itself is valid. Do you think this is a fault in Splunk's regex engine, or do I have something wrong in my configs? Thanks in advance.
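This is usually not the regex engine: in _meta, an unquoted value ends at the first whitespace, so multi-word captures get truncated to their first token. A sketch of the usual fix, quoting the value side in FORMAT (worth verifying against your Splunk version's transforms.conf spec):

```ini
# transforms.conf (sketch): quotes around $2 preserve spaces in the indexed value
[hello_transforms]
REGEX = (?<key>\w+)\s=\s(?<value>[^\]]+)
FORMAT = $1::"$2"
REPEAT_MATCH = true
WRITE_META = true
```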
I have very little experience with Splunk and am on a time crunch, so a bit of patience for my ignorance would be awesome. Today I was setting up an enterprise Splunk solution for logs. I set up universal forwarders on a few devices and set up my indexer on a CentOS server. I set the receiving port (the default of 9997), set up a new index to sort my data into, and added it from the indexer section, which seemed to work, except that I don't actually see any logs.

When I get into those operating systems and run a list forward-server command (on Linux), it comes back with "inactive: ipaddress:port". I tried to see if there was something wrong with my firewall, but everything seems to be open for port 9997, and I can ping back and forth between systems. I checked my outputs.conf file to make sure the right server address is there, and my inputs.conf seems right. I'm beyond clueless after reading all kinds of forums.

I'm also having a bit of an issue with space on the system. Splunk tells me that my disk space is at the minimum under /opt/splunk8, but I don't know what is taking that space. Maybe it's the logs that were sent but never indexed? Where would those end up? (I made the mistake of not setting an index for the monitors that I set up earlier.)

Any help is appreciated. Again, I don't know a whole lot about Splunk, so I'm just trying to get it to work. I had plans on integrating Splunk with Splunk Phantom, but that's not happening until Splunk works, lol. Thanks!
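A few checks that often narrow this down, sketched with placeholder host values (run the splunk commands as the user that owns the installation):

```shell
# on the forwarder: confirm where it thinks it should send data
$SPLUNK_HOME/bin/splunk list forward-server
$SPLUNK_HOME/bin/splunk btool outputs list --debug

# test raw reachability of the indexer's receiving port
telnet <indexer-ip> 9997

# look for tcpout connection errors on the forwarder
grep -i tcpout $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -20

# on the indexer: confirm something is actually listening on 9997
netstat -tlnp | grep 9997
```

On the disk question: events routed to a nonexistent index are dropped (and logged in splunkd.log), not stored, so the space is more likely indexed data under $SPLUNK_HOME/var/lib/splunk plus Splunk's own logs; du -sh on those directories will show which.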
Rule Name: Abnormally High Number of Endpoint Changes By User

Description: Detects an abnormally high number of endpoint changes by user account, as they relate to restarts, audits, filesystem, user, and registry modifications.

| tstats count from datamodel=Endpoint.Filesystem where Filesystem.tag="change" by Filesystem.user
| eval change_type="filesystem", user='Filesystem.user'
| tstats append=T count from datamodel=Endpoint.Registry where Registry.tag="change" by Registry.user
| eval change_type=if(isnull(change_type),"registry",change_type), user=if(isnull(user),'Registry.user',user)
| tstats append=T count from datamodel=Change.All_Changes where nodename="All_Changes.Endpoint_Changes" by All_Changes.change_type, All_Changes.user
| eval change_type=if(isnull(change_type),'All_Changes.change_type',change_type), user=if(isnull(user),'All_Changes.user',user)
| stats count as change_count by change_type, user
| xswhere change_count from change_count_by_user_by_change_type_1d in change_analysis by change_type is above medium
My Cisco devices send their logs to a Kiwi syslog server, which writes them to several files (ISE, switches, ASA, ...). I need to monitor and ingest this data. Which servers do I set the monitoring up on? I have 1 search head, 1 deployment server, 1 indexer, and 1 heavy forwarder.
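A common sketch: install a universal forwarder on the Kiwi syslog server itself (or point the heavy forwarder at a share of its log directory) and monitor the per-device files. The paths, sourcetypes, and index below are examples to adapt to your Kiwi layout and the Cisco add-ons you use:

```ini
# inputs.conf on the forwarder reading the Kiwi log directories (example values)
[monitor://D:\Syslogd\Logs\asa\*.log]
sourcetype = cisco:asa
index = network

[monitor://D:\Syslogd\Logs\ise\*.log]
sourcetype = cisco:ise:syslog
index = network
```

The search head, deployment server, and indexer need no monitor stanzas for this; the deployment server can push the inputs app to the forwarder.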
Hi folks, I have been trying to test Splunk DB Connect on my local machine before deploying it live. I've installed a MySQL instance so I can test the connection. But when trying to set up my first connection as a test, I always get this error message: "The server time zone value 'Est' is unrecognized or represents more than one time zone. You must configure either the server or JDBC driver (via the 'serverTimezone' configuration property) to use a more specific time zone value if you want to utilize time zone support." I'm using Splunk 8.0.1, Java 8, MySQL 8, and Splunk DB Connect 3.
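A common fix, sketched with placeholder host and database names: tell MySQL Connector/J the time zone explicitly via the JDBC URL for the DB Connect connection, or set it on the server side.

```ini
# JDBC URL for the DB Connect connection (host/db are placeholders)
jdbc:mysql://localhost:3306/mydb?serverTimezone=America/New_York
```

```ini
# or set it server-side in my.cnf / my.ini
[mysqld]
default-time-zone = "-05:00"
```

The error comes from Connector/J 8 refusing ambiguous abbreviations like 'Est'; any full IANA zone name (or a fixed offset) satisfies it.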