All Topics

| rex field=DATA "\S(?<DATE>.{10})(?<WORKLOAD>.{3})\S.{137}(?<CPU>.{7}).*" | where WORKLOAD in("F91","F92","FA1","FA2","FA3","FB2","FC4","FC5","FC6","FH1","FH2","FH3","FH4","FNC","FSC") | eval CPU_TIME=replace(CPU,",","") | convert num(CPU_TIME) as CPUTIME | stats sum(CPUTIME) as CPU_TIME_SEC by WORKLOAD
I want to create an alert for one particular error. What would be the exact SPL I need to write? The error is not in the interesting fields, so I used this one. I did from my end: index=os source="/var/log/messages" | eval new_error= "server is not responding" Is the above search correct? If not, please provide me the correct one.
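For reference, a minimal sketch of what an alert search for a fixed error string might look like. Note that eval only creates a new field; it does not filter events, so the search above would return every event regardless of whether the error occurred. Assuming the message text appears verbatim in the raw event:

```
index=os source="/var/log/messages" "server is not responding"
```

Saved as an alert, this can then trigger on a result count greater than zero.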
Is it possible to use DBConnect to pull logs from an application's internal database? The situation we have here is that one of our MDM applications (Informatica) logs all of their audit trail records into a relational database. This isn't like an Oracle database that can be queried via SQL - it's internal to Informatica and can only be accessed via the Informatica UI. Informatica is installed on-prem, so we aren't using a cloud-based version. With that said, how would we go about pulling the audit logs into Splunk? Would DBConnect have a way of connecting to this internal database, or would another method need to be used here?
We have a search that generates a large number of results; for each result we need to take an alert action (individually). While I've increased the maxtime from the default 5 minutes to 3 hours, tracing logs from the alert action show it stops running after 5 minutes despite only having processed a fraction of the search results. For the claim that it only processed a fraction of the results: I determined the number of search results by going to the saved search, clicking on View Results, and selecting the appropriate result set. I determined the number of results processed by looking for a log message in the custom alert action that is generated at the top of the process_event function; the Splunk Add-on Builder was used to build the custom alert action. To increase the maxtime, I initially set it just for this alert action; the search head is dedicated to running alert actions, so I then increased it globally just in case it would matter. After both changes, I validated the setting with btool and then restarted the Splunk instance. Edit: It looks like when I cloned the search so I wasn't modifying the production copy, it added more fields to savedsearches.conf, including the following setting: action.<redacted custom alert action name>.maxtime = 5m I increased that setting assuming it would be a limitation; it does not appear to have resolved the issue. My current assumption is that it was part of the problem, just not the complete problem.
Below is a snippet of code that is working locally, but not in AppD. The specific line is `driver.quit()`:

try:
    time.sleep(1)
    elementName = '//*[@id="ClaimSearch:ClaimSearchScreen:_msgs"]'
    WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, elementName)))
    time.sleep(1)
    driver.quit()
except:
    print("Did not see a Message")
# Continue with script...

Is there a different way to quit?
I know there have been a lot of conversations around this topic, and technology is constantly changing to make things easier, so I was curious if anyone has had recent experience ingesting all Azure logs to an on-premise Splunk instance. We're weighing keeping logs local to Azure and utilizing Sentinel, but we'd prefer to keep them in Splunk so we have one location for logs no matter where they are (i.e. Azure, AWS, on-premise, etc.). Potential solutions: Install a forwarder on every VM and back-haul the traffic across the VPN to on-premise indexers. I don't want to pay the VPN costs for this or have that dependency; it's also just not a very "cloud-ish" solution. Stand up indexers in Azure and replicate the clusters in Azure & on-premise. These VMs cost a lot of money. Send all logs to Event Hub and use Splunk to pull from there. This seems like a decent solution but I'm not sure of the costs or parsing issues this may entail. I would love to hear how others compared Sentinel to Splunk and justified sticking with Splunk in Azure when you had an on-premise Splunk architecture. Note that we want the infrastructure/platform logs but have a hard requirement to get the OS and app logs (i.e. Windows security, RHEL /var/log/secure, Apache, Squid proxy, etc.) Thanks!
I have a dropdown which has to execute two different searches based on a token picker. I am trying to implement the mechanism using the functionality below, but map is not returning any results: | makeresults | eval a = "index=_internal | head 1" | eval b = "index=main | head 1" | eval search1 = case($tok$=1, a, 1=1, b) | map search = "search $search1$"
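A minimal sketch of the same pattern, with two assumed fixes relative to the original: head needs a space before its count (head 1, not head1), and the token comparison is quoted so it still evaluates when $tok$ expands to text:

```
| makeresults
| eval search1 = if("$tok$" == "1", "index=_internal | head 1", "index=main | head 1")
| map search="search $search1$" maxsearches=1
```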
Can the cluster command cluster based on more than one field? I know we can change which field to cluster by, but can we cluster by multiple fields?
I have a bunch of storage clusters that we monitor; 60% of the environment uses normal GB, the other 40% uses GiB. I need to show all of the storage arrays in one report and normalize the storage to GB, and the only field that differs between the storage arrays, besides the array name, is "storage vendor". I need to create an if statement: if vendor is like "X", run these evals: | eval _GB_TiB = (((Capacity_GB)*1.1)/1024)*0.909495 | eval "Prov(TiB)" = (((prov_GB)*1.1)/1024)*0.909495 | eval "Written(TiB)" = ((((writtedGB)*1.1)/1024)*0.909495)/2
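SPL has no block-level if statement, so the condition has to live inside each eval. A minimal sketch, assuming the vendor field is named storage_vendor and that vendor "X" reports in GiB (1 GiB = 1.073741824 GB):

```
| eval norm_GB = if(like(storage_vendor, "X%"), Capacity_GB * 1.073741824, Capacity_GB)
```

The same if(...) wrapper can be applied to each of the prov/written evals above; case() works as well when more than two vendors need distinct conversions.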
Hi there, I'm trying to redirect logs from a syslog device to a separate index. Does anyone see an error in this config?

[host::aaa.bbb.ccc.ddd]
TRANSFORMS-juniper_change_index = juniper_change_index

[juniper_change_index]
SOURCE_KEY = MetaData:Host
REGEX = (.*)
DEST_KEY = _MetaData:Index
FORMAT = juniper

Logs are still going to the main index. I have other working transforms that operate on sourcetypes and other fields, but for some reason I've been unable to get this one, based on source address, working. Thanks!
Hi, based on the Splunk documentation, the calculation of the storage for accelerated data for a year is: Accelerated data model storage/year = data volume per day * 3.4. We wanted to test this calculation, enabled a summary range of 7 days in dev, and found that for 20 GB per day it consumed 16.5 GB of accelerated disk space. As per my understanding it should be: Disk space for 7 days = 20 * 3.4 * 7 / 365 = 1.31 GB. Please let me know if I'm missing anything.
Good morning. I have a problem when normalizing information related to a Check Point device: I have a sourcetype opsec:anti_malware, but I cannot identify which events are allowed / blocked / deferred. I installed the Splunk Add-on for Check Point OPSEC LEA. Has anyone had the same problem?
Hi All, we are running Splunk Cloud version 7.2.9.1 in our environment and are now planning to upgrade to 8.0 or above. I logged a ticket with Splunk Support for upgrading core Splunk Cloud; they said to review the Cloud Monitoring Console app installed on the search head, so I navigated to the Splunk Upgrade dashboard: Splunk App Compatibility Summary, Forwarder Compatibility, Forwarder Count by Status. In Forwarder Count by Status I can see some 10 client machines under Provisional and around 20 client machines under Upgrade Needed. When I viewed the list, I found that most of the servers are Windows 2003 OS running Splunk forwarder version 6.2.15, and a few are RHEL 5 (5.11) running Splunk forwarder version 6.5.1. Teams are working to decommission these old servers, but it might take a few months. In the meantime I want to know: if we upgrade core Splunk Cloud to 8.0 or above, will the client machines running these OS versions (Win2k3 & RHEL 5 (5.11)) and Splunk forwarder versions (6.2.15 & 6.5.1) still be able to ingest logs into Splunk Cloud without any issues? Kindly help with my request.
Hello, fellow splunkers! I am trying to find a search string where I could define a variable & then use it in the same search. Example:     var1=some_value; var2=some_value; | index="$var1-app01-$var2" OR index="$var1-app02-$var2" OR index="$var1-app03-$var2" "error" OR "severe"     Our current Splunk setup has too many indexes per customer/environment & this little feature would help a lot with unifying the searches. I tried to browse the web/this forum and unfortunately did not find this or a similar issue. Any help is appreciated, thank you!
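SPL has no user-defined variables inside a single search, but a search macro with arguments gets close to the example above. A hedged sketch — the macro name `env_indexes(2)` and its argument names are hypothetical, not an existing definition:

```
# macros.conf (assumed names; adjust to your app)
[env_indexes(2)]
args = var1, var2
definition = index="$var1$-app01-$var2$" OR index="$var1$-app02-$var2$" OR index="$var1$-app03-$var2$"
```

Usage would then be `` `env_indexes(prod, v2)` "error" OR "severe" ``.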
We have a multi-site (Production - Site 1, DR - Site 2) Enterprise Security deployment with clustered indexing. Search clustering is not enabled since we have 1 ES SH + 1 Ad-Hoc SH, each in Prod and DR. We'll pursue clustering at a later point in time once more users begin adopting Splunk and requirements grow more stringent. We're trying to come up with a solution to make the transition from Prod --> DR simpler in the event of a disaster. If traffic needs to get routed to the Cluster Master, Deployment Server, License Master, and Monitoring Console located on the DR site in the event that they are unavailable, where should I configure the CNAME and map that to the DR hostnames for these Splunk components? Creating the CNAME DNS records for the CM, DS, LM, MC is the easy part but we're just unsure if the CNAMEs need to be identified in a conf file or elsewhere...
Hello! I'm using Splunk 7.3.2 (also tried on 7.2.5.1 and 7.2.6). I'm replicating the Excel hide/show column feature for Splunk tables, and I am using the fields tag to do it. By associating the fields value with a token, I should technically be able to hide and show table fields for already loaded data, rather than specifying which table fields to include (| table $tok_fields$) and thereby forcing the search to re-run. I've managed to get it working but it is very unreliable. Here is a run-anywhere dashboard:

<form>
  <init>
    <set token="tok_fields">["source"]</set>
  </init>
  <label>Hide Table Fields</label>
  <fieldset submitButton="false">
    <input type="checkbox" token="tok_show_user">
      <label></label>
      <choice value="*">Show user</choice>
      <change>
        <eval token="tok_fields">if($tok_show_user$="*","[\"source\",\"user\"]","[\"source\"]")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_internal | head 10 | table source user</query>
          <earliest>-10min@min</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <fields>$tok_fields$</fields>
      </table>
    </panel>
  </row>
</form>

Now when you save it, it may or may not work. What I have noticed is that it works in edit mode: if you load the dashboard, go into edit mode, then cancel edit mode, it'll continue to work until the dashboard is reloaded. If, on the other hand, you make a change to the dashboard and save, then Splunk decides to hardcode the fields value, meaning you have to go back and replace it with the token again. How can I make this work reliably? Did I stumble across a bug? Thanks! Andrew
Hi all, Can someone share with me an image link for the flow diagram of Splunk Cloud integration with AWS? Simple and Complex
Hi all, I want to ask about the pricing for Splunk Cloud, because I am confused about where the official pricing is published.
Hi, I want to show the % symbol along with the number values in the bar chart. The attached chart only shows the number for each state as 500, 1, etc., but I want to display it as 500%, 1%, and 8749%. My code is: index=* linecount=1 "status.value"=* | chart sum(linecount) as Count_In_Percentage by status.value
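One hedged approach is to append the symbol with eval after the chart command. Note that this converts the values to strings, which renders fine in a table, but a bar chart generally needs numeric values to draw bars, so for a chart the label formatting may need to happen in the visualization options instead:

```
index=* linecount=1 "status.value"=*
| chart sum(linecount) as Count_In_Percentage by status.value
| eval Count_In_Percentage = Count_In_Percentage . "%"
```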
Dear Folks, I have the two different types of events below; the matching attributes from the first event to the second event are (mainModify, headerIdSelection) <=> (modify, headerId). <ar-1> [log@50 STAGE="dev" ACTION="SUBMISSION" TRX="[Input{selectedId='null', selectedProductId='null', isUser=false, selectionInput=SelectionInput [CreateId=5555555, technicalId=999999, modify=2015-09-01-10.03.23.075286, currency=USD, amount=200, headerIdSelection=3452345245, createdTicket=2020-06-7-13.06.53.232320\], client=false}, SelectionOutput{mainModify='2020-06-06-13.08.04.204797', technicalId='null', modify='null'}\]" EVENT="SELECTION" USER_ID="Eer343b"] instance="bar"] Log <ar-1> [log@50 STAGE="dev" ACTION="SELECTION" TRX="[Input{selectedId='1111111111', selectedProductId='00000', propertyId='null', isUser=false, client=false}, SelectionCollection{ProcNumber='222222222', productId='00000', allSelection=[ProductDetails{validity=24, percent=0.59000000, modify='2020-06-06-13.08.03.934946', headerId='3452345245'}, ProductDetails{validity=3, percent=0.57, modify='2020-06-06-13.08.04.158208', headerId='3452345245'}, ProductDetails{validity=9, percent=0.57, modify='2020-06-06-13.08.04.168807', headerId='3452345245'}, ProductDetails{validity=12, percent=0.58, modify='2020-06-06-13.08.04.204797', headerId='3452345245'}, ProductDetails{validity=15, percent=0.63, modify='2020-06-06-13.08.04.221864', headerId='3452345245'}, ProductDetails{validity=20, percent=0.69, modify='2020-06-06-13.08.04.252901', headerId='3452345245'}, ProductDetails{validity=25, percent=0.71, modify='2020-06-06-13.08.04.263227', headerId='3452345245'}, ProductDetails{validity=100, percent=0.73, modify='2020-06-06-13.08.04.298523', headerId='3452345245'}\]}\]" EVENT="SELECTION" USER_ID="Eer343b" instance="bar"] Log I need to extract the output as a table like below:

CreateId | technicalId | mainModify | headerId | selectedId | selectedProductId | validity | percent
5555555 | 999999 | 2020-06-06-13.08.04.204797 | 3452345245 | 1111111111 | 00000 | 12 | 0.58

The validity & percent attributes from the second event must come from the specific ProductDetails element whose modify and headerId match the mainModify and headerIdSelection from the first event. Could you please guide me on how I can achieve such a result? I tried an inner join of the first event to the second event but couldn't get it to work. Thank you in advance for the help.
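One non-join sketch of the recombination step, assuming both event types share the key (headerIdSelection in the first, headerId in the second); all rex patterns below are illustrative assumptions rather than existing field extractions, and the first line stands in for your real base search:

```
EVENT="SELECTION"
| rex "headerId(?:Selection)?='?(?<headerId>\d+)"
| rex "CreateId=(?<CreateId>\d+)"
| rex "technicalId=(?<technicalId>\d+)"
| rex "mainModify='(?<mainModify>[^']+)'"
| rex "selectedId='(?<selectedId>\d+)'"
| rex "selectedProductId='(?<selectedProductId>\d+)'"
| stats values(CreateId) as CreateId values(technicalId) as technicalId
        values(mainModify) as mainModify values(selectedId) as selectedId
        values(selectedProductId) as selectedProductId by headerId
```

Picking out the single matching ProductDetails entry (the one whose modify equals mainModify) would additionally require extracting the list with rex max_match=0, expanding it with mvexpand, and filtering with a where clause.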