All Topics



I am trying to use DBSCAN fit and getting the error "DBScan Error in fit command: Memory limit exceeded". I increased the memory limit from 1000 MB to 3000 MB and I am still getting the error at 50k records. I need to process 500k records. Is there any workaround for this situation?
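For the DBSCAN memory question above: if this is the fit command from the Machine Learning Toolkit, the limits involved normally live in mlspl.conf rather than in the general search memory settings. A minimal sketch, assuming MLTK and hypothetical values (adjust to your environment):

# $SPLUNK_HOME/etc/apps/Splunk_ML_Toolkit/local/mlspl.conf
[default]
max_inputs = 500000          # maximum rows the fit command will accept
max_memory_usage_mb = 3000   # memory ceiling for a single fit
use_sampling = true          # downsample to max_inputs instead of erroring

Per-algorithm stanzas (for example [DBSCAN]) can override [default] if only this algorithm needs the higher limits.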
I need to extract the contents of the message field from a JSON log, but the leading strings must be ignored up to 'stdout F'; I can only capture the first timestamp, and I need the second one. Any ideas how to do this? Examples:

{ app: app01 message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 09:00:00,799] INFO - INFO region: southamerica-east1 }
{ app: app02 message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 10:20:25,799] ERROR - APIAuthenticationHandler API authentication failure region: southamerica-east1 }
{ app: app03 message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 12:57:00,799] WARN - failure due to Invalid Credentials region: southamerica-east1 }
{ app: app04 message: 2022-01-06T17:57:25.799919642Z stdout F [2022-01-06 14:57:25,799] WARN - APIAuthenticationHandler API authentication region: southamerica-east1 }
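A minimal rex sketch for the message format shown above, assuming the part after "stdout F" should become its own field (the field names log_body, app_time, level, and log_msg are hypothetical):

| rex field=message "stdout F (?<log_body>.+)$"
| rex field=log_body "^\[(?<app_time>[^\]]+)\]\s+(?<level>\w+)\s+-\s+(?<log_msg>.+)"

The first rex drops everything up to and including 'stdout F'; the second pulls out the bracketed second timestamp, the log level, and the remaining text.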
If we move the folder "Splunlpkg.backup" from the indexer server to another data mount (i.e., from /dev/sda3/  to /dev/other), what could be the consequences? Will there be any data loss? 
Hello Splunk community! For some context, I started by adding some files into a directory, then I configured the monitor processor of the Splunk universal forwarder in inputs.conf to monitor the directory. However (after restarting the Splunk universal forwarder), when I searched the index in Splunk Enterprise, there were no search results. Afterwards, I added some new files to the directory and suddenly the logs appeared in the search, and what confuses me is that they do contain logs from the existing files in the directory. Is anyone able to explain why the logs of the existing files didn't appear in the first place, and only appeared after I added new files to the directory? Thank you in advance!
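For reference, a minimal sketch of the kind of monitor stanza described above (path, index, and sourcetype are hypothetical):

# inputs.conf on the universal forwarder
[monitor:///var/log/myapp]
index = my_index
sourcetype = my_sourcetype
disabled = false

On the forwarder, "splunk list inputstatus" shows what the tailing processor thinks about each file under the monitored path, which can help confirm whether the pre-existing files were ever picked up.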
I have a few apps that contain reports that I need to copy to ES, please. Thank you.
Hello, We are sizing a Splunk solution for internal usage. Referring to the documentation, it says that a mid-size indexer will require 48 vCPU and 64 GB RAM. However, I wanted to understand how many EPS this kind of indexer can handle. Please advise.
How do I plot a per-operation success rate over a rolling 24 hour period? As a point-in-time query producing a chart, I do

index=kubernetes source=*proxy* api.foo.com OR info OR commitLatest
| rex field=_raw ".*\"(POST|GET) \"(?<host>[^\"]+)\" \"(?<path>[^\"\?]+)[\?]?\" [^\"]+\" (?<raw_status>\d+) (?<details>[^\ ]+) "
| eval status=case(details="downstream_remote_disconnect","client disconnect",match(details, "upstream_reset_after_response_started"),"streaming error",true(),raw_status)
| eval operation=case(match(path,".*contents"),"put-chunked-file",match(path,".*info"), "get-file-info-internal", match(path,".*commitlatest"), "commit-latest-internal", true(), "get-chunked-file")
| eval failure=if(match(status,"^(client disconnect|streaming error|[0-9]|400|50[0-9])$"),1,0)
| stats count by operation, failure
| eventstats sum(count) as total by operation
| eval percent=100 * count/total
| stats list(*) by operation
| table operation, list(failure), list(percent), list(count)
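One possible sketch for turning the point-in-time stats above into a rolling 24-hour series, reusing the same extracted fields (hedged; streamstats with a time_window keeps a running sum, and the hourly bucketing span is an assumption):

... same rex/eval pipeline as above ...
| bin _time span=1h
| stats count as total, sum(failure) as failures by _time, operation
| sort 0 _time
| streamstats time_window=24h sum(total) as total_24h, sum(failures) as failures_24h by operation
| eval success_rate=round(100*(total_24h-failures_24h)/total_24h,2)
| timechart span=1h max(success_rate) by operation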
<query>
index=index_test
| dedup empID
| eval tot = case (match('call.code' , "1") OR match('call.code' , "2") OR match('call.code' , "3") OR match('call.code' , "4") OR match('call.code' , "5") , "Success", match('call.code' , "6"),"Failure")
| stats count(eval(tot="Success")) as "TotalSuccess" count(eval(tot="Failure")) as "TotalFailure"
| rename TotalSuccess as SUCCESS, TotalFailure as FAILURE
</query>

In the drilldown part:

<drilldown>
  <set token="abc">$click.value$</set>
  <set token="xyz">case ($click.name2$="FAILURE", "6",  $click.name2$="SUCCESS", "1,2,3,4,5" )
  <link target="_blank">
    search?q=index=index_test call.operation IN "$abc$" call.code IN "click.name2"
    | dedup empID
    | eval tot = case (match('call.code' , "1") OR match('call.code' , "2") OR match('call.code' , "3") OR match('call.code' , "4") OR match('call.code' , "5") , "Success", match('call.code' , "6"),"Failure")
  </link>
</drilldown>

Here in the drilldown, I want to pass multiple values in $click.name2$="SUCCESS", "1,2,3,4,5", but it is not taking the values.
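A hedged sketch of the token part above: in Simple XML a <set> token is a literal string, while an <eval> token actually evaluates the case() expression, so the multi-value mapping could look along these lines (the condition values mirror the post; treat this as a sketch, not a tested drilldown, and verify that the link can consume a token set in the same drilldown on your version):

<drilldown>
  <set token="abc">$click.value$</set>
  <eval token="xyz">case("$click.name2$"="FAILURE", "6", "$click.name2$"="SUCCESS", "1,2,3,4,5")</eval>
  <link target="_blank">search?q=index=index_test call.operation IN ("$abc$") call.code IN ($xyz$) | dedup empID</link>
</drilldown>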
Hello, I have an SC4S server set up receiving info from our network UPS. I have created a new index for any data to do with our UPS in Splunk. I went into the SC4S server and modified the compliance_meta_by_source.conf and compliance_meta_by_source.csv files. When I add the entry for the new index, the info stops coming to our Splunk environment. If I remove it, it starts coming over again. What am I doing wrong? If I leave the .splunk.index portion out, the info goes over with the new sourcetype and also creates the new field in Splunk, but as soon as I add the index part the info stops going over. Below is the info that I have put in both files.

compliance_meta_by_source.conf
filter f_powerware_ups { host("my ups IP address" type(glob)) };

compliance_meta_by_source.csv
f_powerware_ups,.splunk.sourcetype,"powerware_ups"
f_powerware_ups,fields.vendor,"Eaton"
f_powerware_ups,.splunk.index,"netups"
Hi all, I have the following situation. We have an indexer cluster with 4 peers where there is currently still enough storage on our SSDs, so the home, cold, and thawed paths are the same for all indexes (/data/<index_name>/(colddb|db|thaweddb)). Now we will extend the storage with HDDs and plan to migrate the cold and thawed paths for all indexes to a different storage location (/archive/<index_name>/(colddb|thaweddb)).

Now the question is how this should work. I want to minimize the downtime, so I would prepare the new locations on all 4 indexer peers and already copy all buckets in colddb and thaweddb to the new location. Can I reduce the bucket-roll activity by starting maintenance mode? I would activate maintenance mode, make another copy to get all bucket files into the same state, then adjust indexes.conf and initiate a rolling restart, and afterwards disable maintenance mode. But does a rolling restart work while maintenance mode is active? Or do I only have to copy the files to the new location, change indexes.conf, and restart? And what if a bucket rolls from warm to cold in the meantime: can I copy the files again from the old to the new directory and restart again? I do not want to lose any data.

Please give me advice, because I did not find any information in the documentation about when and how to restart the indexer cluster in such a case.

Kind regards
Kathrin
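For reference, a minimal indexes.conf sketch of the target layout described above (index name hypothetical; homePath stays on the SSD mount while coldPath and thawedPath move to the new HDD mount):

[my_index]
homePath   = /data/my_index/db
coldPath   = /archive/my_index/colddb
thawedPath = /archive/my_index/thaweddb

On a cluster this change would normally be distributed from the cluster manager via the configuration bundle rather than edited on each peer individually.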
After we upgraded from 8.0.7 to 8.2.3, we are having lots of problems with search performance.  We noticed that the analytics workspace changed a great deal and we wonder if that could be causing the performance issue.   Now we have lots of searches queued - that didn't happen before.  Also sometimes the maximum number of historical searches is exceeded and we end up having to restart Splunk.  After the restart, our system runs okay for a while. Any help will be appreciated.
Hello all, I am trying to integrate Trend Micro IWSVA logs into Splunk. It is showing as a supported device but I am unable to find any TA. Can someone suggest how to proceed? Currently I have DDA/DDI logs being ingested, for which the logs are properly parsed. I am using the same sourcetype for the IWSVA logs as well, i.e. cefevents. Can someone please suggest what I should use here? Thanks in advance for the help.
I have a table (that is a split URL) in the following format:

field1 | field2 | field3 | field4 | field5 | field6
aaaaa | 11111 | qqqqq | aaaaaa | tttttt | yyyyyy
aaaaa | 11111 | cccccc | rrrrrrr | |
bbbbb | 22222 | rrrrrrrrr | iiiiiiiiiii | vvvvvv |
ccccc | 22222 | wwwww | ttttttttt | |
ddddd | 33333 | 444444 | 5555555 | |

And the other table has only some of the columns:

field1 | field2 | field3 | field4 | Name | Description
ccccc | 22222 | | | Mickey | Mouse
aaaaa | 11111 | | | Pinky | Brain
ddddd | 33333 | 444444 | | ZZ | Top

I need the rows in the second table to be matched to the first one, even though the second table only has the "base" values. This is what I expect to get:

field1 | field2 | field3 | field4 | field5 | field6 | Name | Description
aaaaa | 11111 | qqqqq | aaaaaa | tttttt | yyyyyy | Pinky | Brain
aaaaa | 11111 | cccccc | rrrrrrr | | | Pinky | Brain
aaaaa | 11111 | qqqqq | aaaaaa | tttttt | yyyyyy | Pinky | Brain
aaaaa | 11111 | cccccc | rrrrrrr | | | Pinky | Brain
bbbbb | 22222 | rrrrrrrrr | iiiiiiiiiii | vvvvvv | | ZZ | Top
ccccc | 22222 | wwwww | ttttttttt | | | Mickey | Mouse
ddddd | 33333 | 444444 | 5555555 | | | ZZ | Top

| join type=left field1 field2... []

It makes sense that when I do a left join it looks for corresponding values in all fields, and if they are not there, I get no results. How can I solve it? Thanks
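A hedged sketch of the join-on-base-fields idea: if field1 and field2 uniquely identify a row in the second table, joining on only those two keys avoids the blank field3/field4 cells blocking the match (the subsearch source is hypothetical; a lookup file with the lookup command would work the same way):

| join type=left field1 field2
    [ search index=owner_index
      | fields field1 field2 Name Description ]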
Hi Team, I am stuck with a query that is not working. I have set up a summary index that collects data every 1 hour and every 15 min. I have a field 'isCostChanged' which I want to count by 'Yes' and 'No' in the summary index. I am using this query:

index=summary-my-sumdata splunk_server_group=default reporttype=costchangecount reporttime=fifteenmin isCostChanged=*
| stats sum(count) as Total, sum(eval(isCostChanged="true")) as CostChanged, sum(eval(isCostChanged="false")) as NoCostChanged by CountryCode
| eval CostChangeRatio=round((CostChanged/Total)*100,2)
| eval NoCostChangeRatio=round((NoCostChanged/Total)*100,2)
| fields CountryCode, NoCostChanged, CostChanged, CostChangeRatio

What it's doing: the Total count is correct, but the counts for isCostChanged=true and =false are not correct; those counts are too low. If I do the below to verify the data, the count is correct:

| stats sum(count) as Total by isCostChanged

Can you help me achieve this? Thanks in advance, Nishant
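When events come from a summary index, each event already carries a count field, so the conditional totals have to sum that field inside the condition rather than count the condition itself. A sketch assuming the same field names as above:

index=summary-my-sumdata splunk_server_group=default reporttype=costchangecount reporttime=fifteenmin isCostChanged=*
| stats sum(count) as Total,
        sum(eval(if(isCostChanged="true",count,0))) as CostChanged,
        sum(eval(if(isCostChanged="false",count,0))) as NoCostChanged
  by CountryCode
| eval CostChangeRatio=round((CostChanged/Total)*100,2)
| eval NoCostChangeRatio=round((NoCostChanged/Total)*100,2)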
Hi, wondering if anyone can help. I am trying to create a new field called FS_Owner_Mail using | eval from both the mail and FS_Owner existing fields, but I'm not too sure how to work it into the search below.

index=varonis sourcetype=xxx:varonis:csv:reports
| eval User_Group=replace(replace('User_Group',"xxxxl\\\\","")," ","")
| join type=left User_Group
    [ search index=ad source=xxx_adgroupmemberscan memberSamAccountName="*_xxx" earliest=-48h
      | dedup groupSamAccountName, memberSamAccountName
      | rename groupSamAccountName as User_Group, memberSamAccountName as Member
      | join type=left Member
          [ search index=ad source="xxx_aduserscan" samAccountName="*_xxx"
            | dedup samAccountName
            | rename samAccountName as Member
            | table Member, displayName, mail]
      | stats values(Member) as Member, values(displayName) as DisplayName, values(mail) as Mail by User_Group
      | eval User_Group=replace(replace('User_Group',"_xxx","")," ","")]
| table Access_Path Current_Permissions, DisplayName, FS_Owner, Flags, Inherited_From_Folders, Mail, Member, User_Group
Hello, we want to connect one Kafka machine with Splunk Connect for Kafka, but it's throwing the error below with the file config/connect-distributed.properties.

[Worker clientId=connect-1, groupId=MySuperSystemID_group] Uncaught exception in herder work thread, exiting: (org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)
org.apache.kafka.connect.errors.ConnectException: Could not look up partition metadata for offset backing store topic in allotted period. This could indicate a connectivity issue, unavailable topic partitions, or if this is your first use of the topic it may have taken too long to create.
    at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:184)
    at org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:145)
    at org.apache.kafka.connect.runtime.Worker.start(Worker.java:197)
    at org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:128)
    at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

The configuration looks like this:

group.id=MySuperSystemID_group
bootstrap.servers=https://x.x.x.x:9093
#key.converter=org.apache.kafka.connect.storage.StringConverter
#value.converter=org.apache.kafka.connect.storage.StringConverter
# SSL Configuration
security.protocol=SSL
ssl.client.auth=required
# enable two way SSL
ssl.endpoint.identification.algorithm=
ssl.key.password=password
ssl.keystore.location=/root/files/keystore.jks
ssl.keystore.password=password
ssl.keystore.type=JKS
ssl.truststore.location=/root/files/truststore.jks
ssl.truststore.password=password
ssl.enabled.protocols=TLSv1.2,TLSv1.1
ssl.truststore.type=JKS

Can someone please guide me here?
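The exception above refers to the worker's internal offset backing store topic. In distributed mode, connect-distributed.properties also declares the three internal topics the worker needs to reach or create on the broker; a sketch with hypothetical topic names and replication factors:

offset.storage.topic=connect-offsets
offset.storage.replication.factor=1
config.storage.topic=connect-configs
config.storage.replication.factor=1
status.storage.topic=connect-status
status.storage.replication.factor=1

If those topics cannot be auto-created, or the broker is unreachable over the SSL listener, the worker fails with exactly this kind of timeout, so checking broker connectivity on port 9093 and the topics' existence is a reasonable first step.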
Hi, I have a dashboard that generates the count of search results into a token:

<done><set token="count_results">$job.resultCount$</set></done>

I use that token inside the title of a panel:

<title>$count_results$ Accounts</title>

Once I schedule the dashboard as a PDF, the title of that panel shows: $count_results$ Accounts. Do you have any workaround for this requirement, to show the count of results inside a panel title in a scheduled PDF dashboard? Thanks.
Hello All,
1) I would like to add a radio button (or any other way to select) one of the results of my REST query search below.
QUERY: |rest /services/data/ui/views | table id label updated "eai:userName" "eai:data" "eai:appName"
2) This query search is saved as a dashboard (auto-refresh) and I have added a few text boxes (User Name, Commit Branch, User Token), as shown in the attached image. These text boxes will be filled in manually by the user.

Use case: I need to choose any one row via a radio button (or any other technical way) and then click the SUBMIT button to send the selected row data and the text box data (manually entered by the user) to my custom Python script. What is the way to achieve this use case in Splunk? Any help on this is appreciated. Thanks!
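A hedged Simple XML sketch of the selection part: a radio (or dropdown) input can be populated from the same REST search, so the chosen row's id ends up in a token alongside the manually filled text boxes (token and field names are hypothetical):

<input type="radio" token="selected_view" searchWhenChanged="false">
  <label>Select a view</label>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>id</fieldForValue>
  <search>
    <query>| rest /services/data/ui/views | table id label updated "eai:userName" "eai:appName"</query>
  </search>
</input>

Handing the collected tokens to a custom Python script is a separate step, typically via a custom search command or a custom REST endpoint packaged in an app.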
Hello All, I have to load balance HTTPS requests over the indexer cluster. I need to know the best approach to load balance the data. Is NGINX the only solution?
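If the HTTPS traffic in question is HEC, an NGINX upstream block is one common way to spread it across the indexer peers; a sketch with hypothetical hostnames and the default HEC port 8088 (a hardware or cloud load balancer works equally well, and forwarder-to-indexer traffic can instead use Splunk's built-in output load balancing):

upstream splunk_hec {
    server idx1.example.com:8088;
    server idx2.example.com:8088;
    server idx3.example.com:8088;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/hec.crt;
    ssl_certificate_key /etc/nginx/ssl/hec.key;

    location /services/collector {
        proxy_pass https://splunk_hec;
    }
}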