All Topics

Considering that we can move data from one index to another within the same cluster by moving buckets: I need to create an index on indexer cluster 2, under a new name and with a longer retention period, and move into it the data of an index that lives on cluster 1. Cluster 1 has a replication factor of 3, i.e. every bucket also exists as copies on two other indexers under different names, and those copies cannot simply be copied into the new index on cluster 2. The challenge is to identify the replicated buckets so they are not copied across: we only want to copy the primary buckets to cluster 2 and let cluster 2 create its own replicated buckets based on its own replication factor. Is this achievable, either via the UI or the CLI, and how? Also, if I only want to copy data of a specific sourcetype from index 1 on cluster 1 to index 2 on cluster 2, how can I do that? NOTE: I cannot create index 2 with the same name as index 1; that name already exists and is in use for other data. How can I achieve this?
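On the bucket-identification part, a sketch assuming a standard clustered bucket layout: buckets a peer indexed itself are named db_&lt;newest&gt;_&lt;oldest&gt;_&lt;localid&gt;_&lt;guid&gt;, while replicated copies are named rb_&lt;...&gt;, so filtering on the directory-name prefix separates originals from copies. The demo paths below are made up; in a real deployment the buckets live under $SPLUNK_DB/&lt;indexname&gt;/db:

```shell
# Demo layout only – real buckets live under $SPLUNK_DB/<indexname>/db.
# db_* = originating bucket (migrate it); rb_* = replicated copy (skip it,
# let the target cluster re-replicate to its own replication factor).
mkdir -p demo/myindex/db/db_1700000000_1690000000_1_AAAA-GUID \
         demo/myindex/db/rb_1700000000_1690000000_2_BBBB-GUID
for b in demo/myindex/db/*; do
  case "$(basename "$b")" in
    db_*) echo "copy: $b" ;;   # originating bucket
    rb_*) echo "skip: $b" ;;   # replicated copy
  esac
done
```

Note that copying buckets always moves whole buckets; filtering by sourcetype cannot be done at the bucket level and generally needs a search-based approach instead (for example a search over the old index piped to `collect` into the new one).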
Hi, how do I create an automatic tag from the following?

eventtypes.conf
[duo_authentication]
search = sourcetype=json:duo type=authentication

tags.conf
[eventtype=duo_authentication]
authentication = enabled

I also added "default: index=index_of_duo" to the admin, user, and power roles. But it simply won't add the tag (I don't understand why, since the eventtype search above works on its own).
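A couple of things worth checking, sketched below under the assumption that the sourcetype really is json:duo: the colon in the sourcetype name means the value should be quoted in the eventtype search, and both files must live in the same app and be shared globally (via metadata permissions) for the tag to be visible outside that app:

```
# eventtypes.conf
[duo_authentication]
search = sourcetype="json:duo" type=authentication

# tags.conf
[eventtype=duo_authentication]
authentication = enabled
```

You can verify the wiring with a search like `tag=authentication sourcetype="json:duo"` after restarting or reloading the app.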
I don't want to send an alert, because I want the benefits of a report (all results in one file, as opposed to sending an alert for each hit on the search). So I'm trying to figure out how to send a report, but only if it has results. If it has zero results, I don't want it to send.
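One way to get both behaviours is a scheduled alert configured to trigger once per run, only when results exist, with the full result set attached. A sketch in savedsearches.conf (the stanza name and search are hypothetical placeholders):

```
# savedsearches.conf – an "alert" that behaves like a conditional report
[my_conditional_report]
search = index=web status=500 | stats count by host
cron_schedule = 0 6 * * *
counttype = number of events
relation = greater than
quantity = 0                     # only fire when there is at least one result
alert.digest_mode = 1            # one notification with all results, not one per result
action.email = 1
action.email.sendcsv = 1         # attach all results as a single CSV file
```

The same settings are available in the UI as trigger condition "Number of Results is greater than 0", trigger "Once", and the email action's "Attach CSV" option.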
Hi, is there any way to find a transaction flow like this? I have a log file containing 50 million transactions, like:

16:30:53:002 moduleA:[C1]L[143]F[10]ID[123456]
16:30:54:002 moduleA:[C2]L[143]F[20]ID[123456]
16:30:55:002 moduleB:[C5]L[143]F[02]ID[123456]
16:30:56:002 moduleC:[C12]L[143]F[30]ID[123456]
16:30:57:002 moduleD:[C5]L[143]F[7]ID[123456]
16:30:58:002 moduleE:[C1]L[143]F[10]ID[123456]
16:30:59:002 moduleF:[C1]L[143]F[11]ID[123456]
16:30:60:002 moduleZ:[C1]L[143]F[11]ID[123456]

I need to find the module flow for each transaction and then find the rare flows.

Challenges:
1. There is no single key value present on all lines belonging to a transaction.
2. The only key value present on every line is ID[123456].
3. An ID might be duplicated and might belong to several transactions.
4. Module names do not follow a specific pattern, and there are lots of them.
5. The ID is not at a fixed position (it appears at the end of each line).

Any idea? Thanks
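A sketch in SPL, with field names invented from the sample lines: extract the module name and ID, group lines that share an ID and arrive close together in time (the maxpause value is a guess to split reused IDs into separate transactions), then count how often each flow occurs so the rarest sort to the top:

```
| rex "^\S+\s+(?<module>\w+):.*ID\[(?<id>\d+)\]"
| transaction id maxpause=5s
| eval flow=mvjoin(module, " -> ")
| stats count by flow
| sort count
```

With 50 million events, `transaction` may be slow; if the ID-reuse problem turns out to be rare in practice, a `stats list(module) by id` approach would scale better at the cost of not splitting reused IDs.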
Hello @kamlesh_vaghela, this is with regard to your solution posted on the thread below: https://community.splunk.com/t5/Splunk-Search/How-to-apply-the-predict-function-for-the-most-varying-field/m-p/422163 I have a relatively similar use case: I have multiple columns, where the first column is _time and the remaining columns are distinct fields with numeric data for each timestamp. I need to compute forecast values using the predict command. I tried your approach of looping through the fields with foreach and passing each to the predict command; however, it only takes one field and computes the forecast for that field alone. I need to calculate the same for all the fields returned by the timechart command, so your input on this would be very helpful. Thank you, Taruchit
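Worth noting: predict accepts several fields in a single invocation, so the foreach loop may not be needed at all. A sketch with assumed field names:

```
... | timechart span=1h sum(bytes) as bytes, sum(events) as events
| predict bytes events future_timespan=24
```

This produces a prediction (plus upper/lower confidence fields) for each listed field in one pass. If the field list is dynamic, it would still need to be assembled before the predict stage, since predict takes the field names as literal arguments.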
Hello, by default DMA summaries are not replicated between nodes in an indexer cluster (for warm and cold buckets). I wonder how the tstats command with summariesonly=true behaves when one node in the cluster fails. Imagine I have a 3-node, single-site IDX cluster with default settings. What happens when one node fails (so the summaries on that node are not available) and I run a search using "| tstats summariesonly=true ..." on this cluster? If the search spans data in primary warm or cold buckets on the failed node, I will get incomplete data, right? (I think so, because the corresponding summaries are missing.) And if so, will I get any error message on the search page? And how does this change in a multi-site cluster? I assume that if one node fails I should still get complete data, because AFAIK in a multi-site cluster every site has a primary copy of each bucket along with its DMA summaries. Is that right or not? I need this information for a project I am working on. Thank you for your answers. Best regards, Lukas Mecir
Hi Splunkers, 1) I want to remove all special characters from my field called "Test" other than "." (dot) and "-" (dash), and 2) return the values in lower case. Example field values for Test:

i4455.mango.com
qa (qa_a_ai_bi1_integration_d01)
app-9999-bee-mysql-prod

Please help.
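A sketch using eval's replace and lower functions: the negated character class keeps letters, digits, dots, and dashes and strips everything else (including spaces, parentheses, and underscores), then the result is lowercased. Adjust the class if you want to keep anything more:

```
| eval Test=lower(replace(Test, "[^A-Za-z0-9.-]", ""))
```

Note the dash is placed last inside the character class so it is treated literally rather than as a range.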
Hi all, I am using Splunk Enterprise Security and am having trouble making our indexes CIM compliant. One of them is Cloudflare. The JSON data is ingested via an AWS S3 bucket, and visualization works fine in the Cloudflare App. However, the CIM Validator doesn't recognize the events, so they can't be used in Splunk ES. Has anyone been able to successfully make these events CIM compliant? Thanks,
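CIM compliance generally comes down to three things: an eventtype matching the data, tags that put the eventtype into a data model, and field aliases/extractions producing the CIM field names the model expects. A minimal sketch for HTTP-style Cloudflare events mapped to the Web data model; the sourcetype and the source field names are assumptions to check against your actual events:

```
# eventtypes.conf
[cloudflare_web]
search = sourcetype=cloudflare:json

# tags.conf
[eventtype=cloudflare_web]
web = enabled

# props.conf – alias the raw JSON fields to CIM field names
[cloudflare:json]
FIELDALIAS-cim_web = ClientIP AS src ClientRequestHost AS dest ClientRequestURI AS uri_path
```

Once the tags and aliases are in place, the CIM Validator and the data model's own search (Datasets view) should start picking the events up.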
Hello, I understand that you access the monitoring web UI through the Launch Controller button on the account page. I received the license today and proceeded with the installation, but two errors occur, as follows. 1. <h1>500 Internal Server Error</h1><br/>Exception <br/> 2. : HttpErrorResponse <html><body><h1>500 Internal Server Error</h1><br/>Exception <br/></body></html> Http failure response for https://chaplinappdynamics.com/controller/restui/containerApp/mainNavConfig: 500 Internal Server Error I didn't click "Use local login"; I clicked "Next". Can you tell me what the problem is? Thank you.
I have a field called position that contains integers, and a token called position_select that is either a floating-point number or a * (= all positions). Now I want to search all positions that match position_select. So I tried something like this: index=index1 | eval position_search = floor($position_select$) | where position = position_search The problem is that, of course, you can't use * in floor. Another problem is that | where position = * is impossible too. And I can't use | search, because | search position = position_search does not work. So the question is: is there any way to use something like floor() on position_select?
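One sketch (token usage assumed to work as in your original search): wrap the token in quotes so the raw * never reaches floor as bare syntax, special-case the wildcard, and let tonumber turn the non-numeric value into null for the numeric branch:

```
index=index1
| eval ps="$position_select$"
| where ps="*" OR position=floor(tonumber(ps))
```

When the token is *, the first condition matches every event; when it is a number, tonumber/floor behave as in your original attempt, and the string comparison branch is simply false.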
Hi, we receive daily emails with lists of IOCs for malware and phishing alerts. Each email may contain multiple IP addresses, domains, and email addresses, and we are trying to extract these to run searches against our web and email logs. I have the regex working for extraction, but it only extracts the first match. I've tried multiple ways of achieving this without success; the current config is:

props.conf
EXTRACT-IOCURL = (?P<IOCURL>[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9][\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9][\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9]+[\[][\.|@][\]][^\s]{2,}|[a-zA-Z0-9]+[\[][\.|@][\]][^\s]{2,})
EXTRACT-IOCIP = (?P<IOCIP>\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}+)

The indexed email looks like this:

Domains
comprised[.]site
badsite[.]studio
malware[.]live

IP addresses
192[.]254[.]71[.]78
192[.]71[.]27[.]202
193[.]182[.]144[.]67

...but the current config only extracts the first record of each: IOCURL = comprised[.]site and IOCIP = 192[.]254[.]71[.]78. Any ideas how to extract all the domains and IP addresses? Thanks
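Inline EXTRACT- settings return only the first match per event. To capture every occurrence, you can switch to a transforms-based REPORT- with MV_ADD, which appends each match to a multivalue field. A sketch, reusing your regexes in simplified form (the sourcetype and stanza names are placeholders):

```
# props.conf
[your_email_sourcetype]
REPORT-iocs = ioc_url, ioc_ip

# transforms.conf
[ioc_url]
REGEX = ([a-zA-Z0-9][a-zA-Z0-9-]*\[[.@]\][^\s]{2,})
FORMAT = IOCURL::$1
MV_ADD = true

[ioc_ip]
REGEX = (\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3}\[\.\]\d{1,3})
FORMAT = IOCIP::$1
MV_ADD = true
```

At search time, `| rex max_match=0 "..."` achieves the same multivalue extraction ad hoc, which can be handy for testing the patterns before committing them to transforms.conf.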
Hello community, we installed the CIM app in our SH cluster. The installation seemed to work as expected, all defaults, no modifications. Afterwards, when trying to launch the app, we landed on a "ta_nix_configuration" page, and trying to access the "cim_setup" page we got a 404. We removed the app, rolled all members, and re-installed. Once again it all seemed to work just fine. This time around we can access the cim_setup page, though if we try to access "<baseurl>/app/Splunk_SA_CIM/" directly, or use the "launch app" link in the GUI, we land on the "<baseurl>/app/Splunk_SA_CIM/ta_nix_configuration" page. Is this somehow the expected behaviour, or have we got some crossed wires somewhere?
I have two counts in the dashboard: one is the total count and the other is the error count, and to get the success count I want the difference. How can we do that?

index=US_WHCRM_int (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=INFO ("*Element*: bmw-cm-wh-xl-cms-contractWithCustomers-flow/processors/2/processors/0 @ bmw-crm-wh-xl-cms-int-api:bmw-crm-wh-xl-cms-api-impl/bmw-cm-wh-xl-cms-contractWithCustomers*") OR "*flow started put*contractWithCustomers" OR "*flow started put*customers:application*" OR "ERROR Message" OR "flow started put*contracts:application*") OR (sourcetype="bmw-crm-wh-xl-cms-int-api" severity=ERROR "Error Message")
| rex field=message "(?<json_ext>\{[\w\W]*\})"
| rex field=message "put:\\\\(?<Entity>[^:]+)"
| rename attributes{}.value.details as details
| rename properties.correlationId as correlationId
| table _time properties.* message json_ext details Entity
| spath input=json_ext
| stats count by Entity

Using | stats count by Entity and | stats count by title I get two separate counts; how can I find the difference between the Entity count and the title count?
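One sketch is to compute both counts in a single search with conditional stats and subtract; the severity test below is an illustrative stand-in for your actual total/error criteria and would need adapting to the two base searches:

```
index=US_WHCRM_int sourcetype="bmw-crm-wh-xl-cms-int-api" (severity=INFO OR severity=ERROR)
| stats count as total, count(eval(severity="ERROR")) as errors
| eval success=total-errors
```

If the two counts must stay as separate searches (e.g. count by Entity vs. count by title), `appendcols` can place them side by side so a final eval can take the difference.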
I have a Splunk Enterprise cluster that no longer ingests new data, but the existing indexes should remain searchable for a while longer. Since search usage is only sporadic, I'd like to save on infrastructure cost and hibernate the whole cluster: bring it up only when someone needs to search the old data, then hibernate it again. How would I best do this? My environment consists of a search head cluster with 2 members and an indexer cluster with 6 members. My understanding is that as soon as I start stopping indexers, the cluster will try to rebalance the data onto the remaining indexer nodes. That seems suboptimal, since I need to stop all the instances eventually and don't want to end up with a single indexer node holding all the data. Any ideas?
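A sketch of the shutdown side (the commands are real cluster CLI, but treat the exact sequence as an assumption to verify against the docs for your version): putting the cluster manager into maintenance mode stops it from reacting to peers going down, so no bucket fixup or replication is triggered while you stop the peers one by one:

```
# On the cluster manager: suspend bucket fixup before touching the peers
splunk enable maintenance-mode

# Then stop each indexer peer in turn
splunk stop

# On the way back up: start the peers first, then on the manager
splunk disable maintenance-mode
```

Maintenance mode does not survive forever as an operational state, so for long hibernation periods the simpler framing is: manager in maintenance mode, all peers stopped, manager stopped last; on wake-up, start the manager first, then the peers, then release maintenance mode once all peers have rejoined.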
Hi all, I have a CSV file which contains some counts. I have a set of groups like a, b, c, d, which are the values of a multiselect dropdown (the set is dynamic), and the columns in the CSV are new_a_Total, new_a_added, new_a_removed, new_b_Total, new_b_added, new_b_removed, new_c_Total, new_c_added, new_c_removed, new_d_Total, new_d_added, new_d_removed. When I select more than one value from the multiselect dropdown, I want to add up the respective Total, added, and removed columns of the selected groups and show the sums in a timechart. For example, if a and b are selected: new_a_Total and new_b_Total should be summed and renamed Total, new_a_added and new_b_added summed and renamed added, and so on, so all the respective data is combined into a single result. How can I achieve this? I am currently trying it with foreach. Any suggestions would be really helpful.
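A sketch of the per-row summation with foreach, hard-coding the a-and-b selection for illustration (the lookup name is hypothetical; in the dashboard, the field lists would be assembled from the multiselect token instead):

```
| inputlookup group_counts.csv
| eval Total=0, added=0, removed=0
| foreach new_a_Total new_b_Total     [ eval Total=Total+coalesce('<<FIELD>>', 0) ]
| foreach new_a_added new_b_added     [ eval added=added+coalesce('<<FIELD>>', 0) ]
| foreach new_a_removed new_b_removed [ eval removed=removed+coalesce('<<FIELD>>', 0) ]
| table _time Total added removed
```

A multiselect token can be configured with a prefix/suffix and value templates so that selecting a and b yields exactly a field list like `new_a_Total new_b_Total`, which then drops straight into the foreach clauses; coalesce guards against rows where a column is missing.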
Dear all, please suggest how to create a separate Incident Review dashboard for each team, or how notables can be separated by team, i.e.:

Windows team - can only see Windows-related notables
Unix team - can only see Unix-related notables
SOC team - can see all notables
Hello community, I'm encountering a problem that's probably simple to fix, but no matter how hard I try, I can't manage it. I have a query that returns results which I count per time range, so I can chart the hourly load. However, I noticed that when there are no results in a time range (for example between 3:00 a.m. and 4:00 a.m.), the graph is incomplete: the time range in question is missing. Here is my current query:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count by Heure
| rename count AS Events
| sort Heure

I tried to force a "0" value when there is nothing, but that changed nothing:

index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure = strftime(_time, "%H")
| stats count by Heure
| rename count AS Events
| eval Events=if(isnull(Events) OR len(Events)==0, "0", Events)
| sort Heure

I searched the forum to see if other people had had this problem, but I couldn't find it (or didn't look well enough). Do you have an idea for simply adding a "0" value when a time slot is empty, so that it shows up in the graph? Best regards, Rajaion
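The eval-based fix can't work because stats never emits a row for an hour with no events, so there is no row for the eval to act on. One sketch is to append a zero-valued row for each of the 24 hours and merge them in; real counts dominate via the final sum:

```
index="oncall_hp" currentPhase=UNACKED routingKey=*event* entityDisplayName!=*Local-Self-Monitoring*
| dedup incidentNumber
| eval Heure=strftime(_time, "%H")
| stats count as Events by Heure
| append
    [| makeresults count=24
     | streamstats count
     | eval Heure=printf("%02d", count-1), Events=0
     | fields Heure Events]
| stats sum(Events) as Events by Heure
| sort Heure
```

If you only ever chart a single day, `timechart span=1h count` is the simpler alternative, since timechart emits one row per interval across the whole time range and fills empty intervals with 0 by itself.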
Hello, the splunkd service is not working: after I start or restart it, it stops again. I have tried several times. Could you please help me sort out this issue? Thanks in advance.
Hi all, I have a case about monitoring Linux servers. Here is what I am trying to do. I am not sure whether it is all possible, but I have to explore the possibilities, because the system staff asked me for these:

1. Servers with root SSH access enabled --> Need help
2. When someone changes the sudoers file --> Done
3. Root password changes --> Done
4. Users who have UID "0" other than root --> Need help

I have done some of the steps, but I need help with items 1 and 4. Any help would be appreciated!
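Sketches for items 1 and 4, assuming you monitor /etc/ssh/sshd_config and /etc/passwd with a forwarder and that each file line arrives as its own event; the index and source values are placeholders to adapt:

```
index=linux source=/etc/ssh/sshd_config
| rex "^\s*PermitRootLogin\s+(?<root_login>\S+)"
| where root_login="yes"
| stats latest(root_login) as PermitRootLogin by host
```

And for UID 0 accounts other than root (the third colon-separated field of /etc/passwd is the UID):

```
index=linux source=/etc/passwd
| rex "^(?<user>[^:]+):[^:]*:(?<uid>\d+):"
| where uid="0" AND user!="root"
| stats values(user) as uid0_users by host
```

Note that a commented-out `#PermitRootLogin` line or an absent directive means the sshd default applies, so the first sketch only catches explicit "yes" settings.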
Hi all, is there a way to demote a case back to a container using a playbook? Thank you in advance.