All Topics

Hi, I plan to install Splunk Enterprise as a SIEM in our cyber security operation center, with a universal forwarder installed on each workstation to transmit Windows event logs. From what I studied on the Splunk site, it seems I can design either Architecture 1 or Architecture 2 as shown in the picture below.   I would like to know the pros and cons of using a heavy forwarder, because I would need to purchase an additional server to install it. Also, I would like technical purchasing support from a Korean engineer. Could you please give me an email address for technical support? I could not find an email address for a Korean engineer on the Splunk website.    Best regards,
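For reference, the main functional difference between the two designs shows up in each forwarder's outputs.conf. A minimal sketch, with placeholder hostnames and ports that are not from the post:

```
# outputs.conf on each universal forwarder
# Architecture 1: workstations send directly to the indexer(s)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

# Architecture 2: point the same stanza at the heavy forwarder instead,
# e.g. server = hf1.example.com:9997, and let the HF parse, filter, and
# route events before sending them on to the indexers.
```

In broad terms, a heavy forwarder adds a parsing/filtering/routing tier (useful for dropping noisy Windows events before they reach the indexers), at the cost of an extra server to buy and administer and an extra network hop.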
We are using the latest version of Lookup File Editor (3.4.6) and Splunk Cloud version 8.1.2012.1. I can't edit or save any of the lookup files in the lookup editor. I tried uploading the .csv, but no luck. I get an error saying "you do not have permission", although there should be no permission issue since I have admin rights. When I tried to change the permissions (for testing), it didn't let me save. I'm getting the error below: Splunk could not update permissions for resource data/lookup-table-files [HTTP 500] Splunkd internal error; [{'type': 'ERROR', 'code': None, 'text': "This handler does not support the 'edit' action and cannot be used for ACL modification."}. Please suggest a remedy for this.
We have a 7-node multisite SH cluster behind a VIP/LB. The ML team are coming up against limits on searches, some of which can be set on a per-user or per-search basis. However, some limits are global (e.g. subsearch limits), and we cannot change those settings without risking platform instability. The plan is to create a dedicated search head for the ML team. It would be part of the cluster but not behind the VIP; the ML team would get a separate GUI and REST VIP targeting the new SH. The extra SH would mean an even number of members, but I think this is OK as long as there is always an odd number in each site (5/3 in our case). Does this sound like a sensible solution?
I'm trying to figure out how to calculate the network utilization on this server using eval and stats, and I'm having trouble finding the correct search and expressions. Can you help me?   index=* sourcetype="Perfmon:Network Interface" source="Perfmon:Network Interface" counter=* | eval bytes_out = "Bytes Sent/sec", bytes_in = "Bytes Received/sec" | eval Bandwidth = bytes_out + bytes_in | timechart count by Bandwidth limit=10   I also need to convert the bytes to Mbps. I would really appreciate some help here. Thank you in advance.
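As a sketch of one possible fix: with the usual Perfmon event shape, each event carries the counter name in a `counter` field and the reading in a `Value` field (an assumption — check your own events), so the eval statements above assign string literals rather than field values. Something like the following may be closer:

```
index=* sourcetype="Perfmon:Network Interface" counter="Bytes Sent/sec" OR counter="Bytes Received/sec"
| eval mbps = Value * 8 / 1000000
| timechart span=1m avg(mbps) BY counter
| addtotals fieldname="Total Mbps"
```

The eval converts bytes/sec to megabits/sec (×8 bits, ÷1,000,000), and addtotals sums the in and out columns per time bucket.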
Hi everyone, I have the string below: isadhakdahdj asdh, hosadhao activity=Follow Up, entryName=Initial Outreach, asasa adadad oidaoidadalnd. I want to extract: activity=Follow Up entryName=Initial Outreach The activity and entryName keys are static, but their values may be dynamic.
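A hedged sketch, assuming the two key=value pairs are always comma-terminated as in the sample:

```
... | rex "activity=(?<activity>[^,]+),\s*entryName=(?<entryName>[^,]+)"
```

Each capture group grabs everything up to the next comma, so dynamic values such as "Follow Up" or "Initial Outreach" are extracted whole, spaces included.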
Hi everyone, I have a requirement. As of now my panel auto-refreshes every 5 seconds. I want to create an Auto Refresh checkbox which, when checked by the user, starts auto-refresh every 5 seconds and stops automatically after 5 minutes. By default the checkbox should be unchecked. Below is my code:

<form theme="dark">
  <label>Process Dashboard Auto Refresh</label>
  <fieldset submitButton="true" autoRun="true">
    <input type="time" token="field1" searchWhenChanged="true">
      <label>Date/Time</label>
      <default>
        <earliest>-15m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="text" token="process_tok1">
      <label>Processor Id</label>
      <default>*</default>
    </input>
    <input type="text" token="ckey" searchWhenChanged="true">
      <label>Parent Chain</label>
      <default></default>
      <prefix>parent_chain="*</prefix>
      <suffix>*"</suffix>
      <initialValue></initialValue>
    </input>
    <input type="text" token="usr">
      <label>User</label>
      <default>*</default>
    </input>
    <input type="checkbox" token="auto">
      <label>Auto Refresh</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=abc sourcetype=xyz source="user.log" $process_tok1$ | rex field=_raw "(?&lt;id&gt;[A-Za-z0-9]{8}[\-][A-Za-z0-9]{4}[\-][A-Za-z0-9]{4}[\-][A-Za-z0-9]{4}[\-][A-Za-z0-9]{12})" | join type=outer id [inputlookup parent_chains_e1.csv] | search $ckey$ | search $usr$ | eval ClickHere=url | rex field=url mode=sed "s/\\/\\//\\//g s/https:/https:\\//g" | table _time _raw host id parent_chain url</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5s</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <condition field="url">
            <link target="_blank">$row.url|n$</link>
          </condition>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

Can someone guide me on how to achieve this, and whether it is possible at all?
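One Simple XML-only workaround (a sketch, untested against this dashboard): duplicate the panel, give only one copy a <refresh>, and toggle between the copies from the checkbox's <change> handler via depends/rejects. The automatic stop after 5 minutes is not expressible in pure Simple XML and would need custom JavaScript.

```xml
<input type="checkbox" token="auto" searchWhenChanged="true">
  <label>Auto Refresh</label>
  <choice value="on">Refresh every 5s</choice>
  <change>
    <condition value="on">
      <set token="auto_on">true</set>
    </condition>
    <condition>
      <unset token="auto_on"></unset>
    </condition>
  </change>
</input>
<!-- refreshing copy, shown only while the box is checked -->
<panel depends="$auto_on$">
  <table>
    <search>
      <query>... same query as the existing panel ...</query>
      <refresh>5s</refresh>
      <refreshType>delay</refreshType>
    </search>
  </table>
</panel>
<!-- static copy, shown while the box is unchecked -->
<panel rejects="$auto_on$">
  <table>
    <search>
      <query>... same query as the existing panel ...</query>
    </search>
  </table>
</panel>
```

The token names (auto_on) are placeholders; the duplicated panel keeps the dashboard usable when auto-refresh is off, at the cost of maintaining the query in two places.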
Hi Team, I have a requirement. I have raw logs as shown below:

2021-02-12 09:22:32,936 INFO [ Web -4092] AuthenticationFilter Attempting request for (<asriva22><lgposputb500910.ghp.bcp.com><CN=lgposputb50010.ghp.aexp.com, OU=Middleware Utilities, O=ABC Company, L=Phoenix, ST=Arizona

2021-02-12 09:22:38,689 INFO [ Web -4099] o.a.n.w.s.AuthenticationFilter Authentication success for smennen

2021-02-12 08:45:05,277 INFO [Web -3253] o.a.n.w.s.AuthenticationFilter Attempting request for (<JWT token>) GET https://ebac/api/flow/controller/bulletins

I want to remove the leading timestamps (highlighted in the original post) from the logs. How can I do that? Thanks in advance.
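A search-time sketch, assuming the timestamp always has the shape YYYY-MM-DD HH:MM:SS,mmm at the start of the event:

```
... | rex mode=sed "s/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}\s+//"
```

To strip it before indexing instead, the same sed expression could go in a SEDCMD stanza in props.conf for the relevant sourcetype; note that that permanently alters _raw for newly indexed events.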
This is weird to me - I'm not getting the same information when I create a dashboard from events as when I run the search in the SPL editor - it's as if the same events are not being interpreted the same way. I've never seen this happen before. The events are Microsoft:0365 events, and when I run the query in SPL it works fine, but when I transpose the same query into a dashboard, some of the fields, like dvc, are null. Both the query and the dashboard are created under Search and Reporting, so I don't get this. The query is a bit of a new venture for me - it's a join between the events and a union of two ldapsearches (to establish the users whose events I want to extract).
I have to bring two different numerical fields into one column. I am fetching the fields from a view. Example: I have fields like the following: field1 = Compliant_machines (the count of compliant machines) and field2 = Non_Compliant_machines (the count of non-compliant machines). I have to present this as:

Compliance              sum(Count)
Compliant_Machines      123456
Non_Compliant_machines  57421

I have achieved this using transpose, but the drilldown does not work with transpose and stats. I used this code for the drilldown: |search Account_Environment="*" Acc_Name="*" Baseline="*" Rule="*" |stats sum(Compliant_Tested) as Compliant sum(Noncompliant_Tested) as "Non Compliant"|transpose|rename column as "Compliance", "row 1" as "count"| search Compliance="Compliant"|table Acc_Name Acc_No Baseline form Rule etc. The code above didn't work for me. Can someone help me achieve this drilldown?
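For the transpose half, something along these lines keeps the two rows with a clean header (field names taken from the post; adjust as needed):

```
| stats sum(Compliant_Tested) AS Compliant sum(Noncompliant_Tested) AS "Non Compliant"
| transpose column_name="Compliance"
| rename "row 1" AS count
```

One caveat: after a transpose, the clicked row only carries the transposed columns, so pre-transpose fields like Acc_Name are no longer available. A drilldown token such as $row.Compliance$ (the clicked row's label) usually has to drive a second, more detailed search instead.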
Hi, I have two date fields that show up in my dashboard panel, which lists events after the visualisation panels: "2021-11-02 16:53:38" and "11/02/21 at 16:52:37". I am trying to find a way to reformat the second date (on the right) to match the first: YYYY-MM-DD hh:mm:ss. Is there an easy way? This is a search query inside a table listing. Thank you.
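A sketch, assuming the second value lives in a field called raw_date (a hypothetical name; substitute the real one):

```
... | eval raw_date = strftime(strptime(raw_date, "%m/%d/%y at %H:%M:%S"), "%Y-%m-%d %H:%M:%S")
```

strptime parses the "11/02/21 at 16:52:37" form (the literal word "at" in the format string is matched as-is), and strftime re-emits the parsed time as YYYY-MM-DD hh:mm:ss.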
Hello Community, since upgrading to 8.1.1 I'm getting the following error message on my search head cluster: "Unable to initialize modular input "relaymodaction" defined in the app "Splunk_SA_CIM": Introspecting scheme=relaymodaction: script running failed (exited with code 1)." Does anyone have an idea how to fix that? Thanks, cheers, Harald
I've gone through and set up instrumented Java applications where the full app can be instrumented and created just through the controller call, but I'm not having the same success with Python agents. Following the guide (https://docs.appdynamics.com/display/PRO42/Install+the+Python+Agent), I'm able to successfully link the agent, but only when using the getting-started wizard and the provided node name. If I don't use the wizard and try to launch otherwise, I see a successful controller connection that picks up the name and tier, but no actual data or metrics being passed (e.g. everything is at 0).  Is there a way to create Python agent apps without the getting-started wizard?
Hello, I have an automated upgrade plan that does the following:
- Puts the cluster in maintenance mode: splunk enable maintenance-mode
- Goes one by one through each of the 3 indexer peers and runs: splunk offline
- Extracts the upgrade tar file to the necessary location
- Runs splunk start, accepts the license, and answers yes
- Repeats for the next peer
- Disables maintenance mode

I am trying to upgrade the peers without end users seeing messages, but unfortunately users see things like the following:

"Unable to distribute to peer named X because peer has status=Down. Verify uri-scheme, connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available." - even though the peer is Up according to the Cluster Master.

"Connection Refused for peer=X" - which suggests the search heads are still sending queries to, or holding an established connection with, the peer. I would expect the search head to know that a peer is down and not communicate with it until its indexes have been validated and deemed searchable.

Does anyone have recommendations for making the indexer upgrade as seamless as possible for end users? Things tried: adjusted restart_timeout, quiet_period, and decomission_node_force_timeout on the cluster master. Thanks, J
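For comparison, the sequence described above sketched as a runbook (paths and the tarball name are placeholders):

```shell
# cluster master: put the cluster in maintenance mode first
splunk enable maintenance-mode --answer-yes

# on each indexer peer, one at a time
splunk offline                              # graceful shutdown; primaries are reassigned
tar xzf splunk-<new-version>.tgz -C /opt    # lay the new version over the old install
splunk start --accept-license --answer-yes --no-prompt

# cluster master: only after every peer is back up and searchable
splunk disable maintenance-mode
```

The ordering matters: maintenance mode must stay on until the last peer has rejoined, otherwise the cluster master starts fix-ups mid-upgrade.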
Here is my environment: Cluster Master, License Master, and Deployment Server (on one Splunk instance); a cluster of 3 indexers; a separate Search Head. I noticed when I checked Forwarder Management on my deployment server that my clients had not phoned home in 8 hours.  I then ran  index=_internal httppubsubconnection "uri=/services/broker/phonehome" to see whether there were any errors phoning home, but to my surprise everything was good. In Forwarder Management I also deleted a record and it came right back, which also confirmed a successful phone home, but it said 8 hours ago.  I ran other searches and the event times and log times are good. Then I noticed in my search history that the previous search I had conducted was reported as run 8 hours ago even though I had just run it.  I played with the time zone in user preferences, but nothing changed. Any suggestions on why everything in Splunk is 8 hours behind when it comes to phone homes and when a search was conducted?
How would I take a 24-hour search such as: index=* | iplocation src_ip | stats count by src_ip, Country, dest_ip, dest_port | sort 10 -count     and use the IPs from that 24-hour search to look at the traffic in the last hour (the same 10 IPs from the 24-hour window, not the most popular from the last hour)? I have tried to bucket _time span=1h, but that just finds the most active IPs in the last hour, which changes what I need. Any suggestions are appreciated!
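One way to sketch this is a subsearch: compute the top 10 source IPs over 24 hours, then feed them as a filter into an outer search over the last hour:

```
index=* earliest=-1h
    [ search index=* earliest=-24h
      | stats count BY src_ip
      | sort 10 -count
      | fields src_ip ]
| iplocation src_ip
| stats count BY src_ip, Country, dest_ip, dest_port
```

The subsearch returns its src_ip values as an implicit OR filter, so the outer hour-long search is restricted to exactly those 10 IPs regardless of how active they were in the last hour. (Be mindful of subsearch result and runtime limits on index=* over 24 hours.)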
I want to get a per-second average over a period of time, but I am running into an issue calculating it. For example, one item only occurs 4 or so times, and when I calculate the average I get 1; the calculation is not considering the seconds where there are no results. I am doing the following now:   | bin _time span=1s |stats count as Vol by foo | stats avg(vol) by foo   I would expect a very low decimal because of all the zero-count seconds, but I get 1. I have attempted multiple ways of doing this, but it seems to average only the existing values, not the times where I have no results.
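One sketch that accounts for the empty seconds without materializing them: count the events per foo, then divide by the length of the search window in seconds via addinfo:

```
... | stats count AS total BY foo
| addinfo
| eval avg_per_sec = round(total / (info_max_time - info_min_time), 6)
| fields foo total avg_per_sec
```

addinfo attaches the search's time bounds (info_min_time and info_max_time, in epoch seconds) to every row, so an item seen 4 times in a 1-hour window comes out as roughly 0.0011 rather than 1.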
Hi, I am trying to upgrade Splunk Enterprise version 7.3.2, which has forwarding enabled, to the new version 8.1.2. The upgrade fails and rolls back.  I did try to run the installation as an admin user.  Can you please let me know the possible causes and how to fix this?  I can see the following error in the migration logs:  Failed cli cmd _py_internal  Any help is appreciated.
I have one index, idx1, and another index, idx2, with a common column "A" on which matching needs to be done. I'm having difficulty combining the data from both. I need to combine the data such that where there are duplicates, the data from idx1 is prioritized over the data from idx2 - basically the equivalent of the set operation a + (b - a). I've tried the following: | set diff [ search index=idx2 sourcetype=src | dedup A ] [search index=idx1 sourcetype=src | dedup A ] | stats count BY index A | table index A Here I get 10840 result rows with both columns filled, but when I want to display other columns from the two indexes, those columns come back empty. Upon executing: | set diff [ search index=idx1 sourcetype=src | dedup A ] [search index=idx2 sourcetype=src | dedup A ] | stats count BY index I get the output: index count / idx1 4791 / idx2 6049   Can anyone help me with how to proceed? I've also tried the following, but I'm not sure about it: index=idx1 sourcetype=src | append [ | set diff [ search index=idx2 sourcetype=src | dedup A ] [search index=idx1 sourcetype=src | dedup A ]] | stats count BY index A | table index A
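A sketch that avoids set diff entirely: search both indexes at once, rank idx1 first, and let dedup keep the preferred copy of each A:

```
(index=idx1 OR index=idx2) sourcetype=src
| eval src_priority = if(index="idx1", 0, 1)
| sort 0 src_priority
| dedup A
```

Because dedup keeps the first event it sees for each value of A, and the sort puts idx1 rows first, this yields a + (b - a) while preserving every other column from the winning event. sort 0 lifts the default 10,000-row sort limit.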
Hi, I added a data source in SSE, and it displays the status "Analyzing CIM and Event Size", but it has been stuck like that for hours. When I search for the data, the search returns results, but nothing appears in Data Inventory. Do you know of any action I can take to fix this? Thank you.