Hi, I have a question and I hope to receive an answer soon. I am using Splunk Enterprise installed on CentOS 7 servers. The OpenSSH versions in use are 7.4 and 8.1. I want to update OpenSSH on all Splunk servers (8 CentOS 7 servers: 2 search head cluster members, 2 indexer cluster members, 2 heavy forwarders, 1 deployment server and 1 master node) from 7.4/8.1 to the latest OpenSSH version still supported on CentOS 7. The Splunk Enterprise version in use is 8.0.7. I would like to ask what effect the upgrade will have on Splunk's performance and what to prepare on the Splunk side before updating OpenSSH. Thanks for all!
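Not a definitive procedure, just a hedged sketch: OpenSSH is independent of splunkd, so the package upgrade itself should not affect Splunk's performance; the main preparation is stopping Splunk cleanly on each node before patching, and taking indexer cluster peers offline gracefully so buckets are handed off first. Paths assume a default install location:

# On a non-clustered role (heavy forwarder, deployment server, search head)
/opt/splunk/bin/splunk stop
yum update openssh openssh-server openssh-clients
/opt/splunk/bin/splunk start

# On an indexer cluster peer, prefer a graceful shutdown
/opt/splunk/bin/splunk offline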
Hi, I am a bit new to the Splunk community and interested in building a Splunk app that can process host-level log data (particularly logs produced by auditd). My end goal is to provide some analysis of the host logs and report that back to the user in a Splunk dashboard. I am unsure how to do the first step: ingesting data from the host machine into the app.
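A minimal sketch of that first step, assuming a universal forwarder on the host and an index named linux_audit that already exists (both assumptions; the sourcetype follows the Splunk Add-on for Unix and Linux convention):

# inputs.conf in your app (e.g. myapp/default/inputs.conf) or on the forwarder
[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit
index = linux_audit
disabled = false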
Brand new servers. Not receiving all data from the UF. Confirmed connectivity. Confirmed inputs via:

/opt/splunkforwarder/bin/splunk btool inputs list | grep bc_ | grep "\["

Only getting 2 sourcetypes when there should be at least 16 for the index. Getting this error message:

Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).

Getting this when starting splunkd:

Splunk> Take the sh out of IT.

Checking prerequisites...
        Management port has been set disabled; cli support for this configuration is currently incomplete.
        Checking conf files for problems...
                Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
                Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
        Done
        Checking default conf files for edits...
        Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.0.3-dd0128b1f8cd-linux-2.6-x86_64-manifest'
        All installed files intact.
        Done
All preliminary checks passed.

Starting splunk server daemon (splunkd)...
Done
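As a hedged follow-up check (not from the original post), the forwarder's own internal logs often show why monitored inputs are skipped, assuming the UF forwards its _internal data; the host name is a placeholder:

index=_internal host=<uf_hostname> source=*splunkd.log* (log_level=ERROR OR log_level=WARN)
| stats count by component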
I am having an issue finding a way to standardize email addresses in a query so that the output is "First Last" in a new field. There are mainly two email formats: "first.x.last@domain.com" and "first.last@domain.com". The first query works for "first.x.last@domain.com":

| makeresults
| eval name="first.x.last@domain.com"
| rex field=name "^(?<Name>[^@]+)"
| eval tmp=split(Name,".")
| eval tmp2=split(Name,".")
| eval FullName=mvindex(tmp,0)
| eval FName=mvindex(tmp2,2)
| table FullName FName
| eval newName=mvappend(FullName,FName)
| eval FN=mvjoin(newName, " ")
| table FN

And this one works for "first.last@domain.com":

| makeresults
| eval name="first.last@domain.com"
| rex field=name "^(?<Name>[^@]+)"
| eval tmp=split(Name,".")
| eval FullName=mvindex(tmp,0,1)
| eval FN=mvjoin(FullName, " ")
| table FN

Any recommendations on how to get an output of "First Last" in one field for both email formats?
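One way to cover both formats in a single pipeline (a sketch, not the only option) is to take the first and last elements of the split, since mvindex() accepts negative indexes:

| makeresults
| eval name="first.x.last@domain.com"
| rex field=name "^(?<Name>[^@]+)"
| eval parts=split(Name,".")
| eval FN=mvindex(parts,0)." ".mvindex(parts,-1)
| table FN

With name="first.last@domain.com" the same eval yields "first last", because index -1 is the last element either way.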
I have the below SPL with a regex that I was using for a horseshoe visualization, but I'm trying to convert it to a stacked bar graph showing the log level per process. Each log level should have a different colour: red for ERROR, green for INFO, blue for DEBUG, etc. It should be a trellis visual.

index="intau_workfusion" sourcetype=workfusion.out.log host=*
| rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s*[^\[]+\s\[(?<Log_level>[^\]]+)"
| search Log_level="ERROR"
| where Process != ""
| eval hour=strftime(_time,"%H")
| where hour >= 5 AND hour < 18
| eval day=strftime(_time,"%w")
| where day >= 1 AND day <= 5
| bin _time span=1d
| stats count AS ERRORS by Process
| sort - ERRORS
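A hedged sketch of the stacked direction: drop the ERROR-only filter and count by both Process and Log_level (the regex and filters unchanged from the search above):

index="intau_workfusion" sourcetype=workfusion.out.log host=*
| rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s*[^\[]+\s\[(?<Log_level>[^\]]+)"
| where Process != ""
| chart count over Process by Log_level

In the panel's XML, <option name="charting.chart.stackMode">stacked</option> stacks the bars and <option name="charting.fieldColors">{"ERROR":0xFF0000,"INFO":0x00FF00,"DEBUG":0x0000FF}</option> pins a colour per level.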
I'm trying to forward Cisco SNMP traps to my test Splunk environment. I'm seeing the data I want when I look at /var/log/syslog, but when I run my search in the Splunk console I'm getting gibberish data. What else should I look into?
I am familiar with the vulnerability advisories for Splunk here: https://advisory.splunk.com/advisories. However, these appear to cover primarily Splunk-developed apps. I have yet to see one show up for add-ons or third-party apps. Is there a place where I can find out whether there are advisories or vulnerabilities reported for apps and add-ons?
Hi all, I'm looking for the best method to collect DNS logs, specifically the DNS query and answer logs. I see there is a preliminary setup in named.conf to enable query logging and choose where to write it:

options {
    querylog yes;
}

and

logging {
    channel querylog {
        file "/var/log/dns.log";
        severity debug 3;
    };
};

Does this mean the logs will be in the dns.log file? And what is the best method to ingest them into Splunk with the right field mapping? What are your experiences with the Linux DNS service? I collect events with a Splunk deployment server + heavy forwarder --> Splunk ES Cloud. Thank you. R
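For the ingestion side, a minimal sketch assuming a monitor input on the forwarder; the sourcetype follows the Splunk Add-on for ISC BIND convention and the index name is a placeholder:

# inputs.conf on the forwarder
[monitor:///var/log/dns.log]
sourcetype = isc:bind:query
index = dns
disabled = false

With that add-on installed, its props should handle the field mapping for query events; otherwise you would define your own extractions.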
Below is a colorPalette expression I have in my classic dashboard:

<format type="color" field="Present">
  <colorPalette type="expression">if(value &gt; $90PercentThreshold$, "#f50d0d", "#43e40a")</colorPalette>
</format>

(red and green, respectively). My query is:

"<sourceString>" "Checking <directory>*"
| rex field=_raw "InputObject\";\s+value=\"Checking (?<Directory>[^\"]+) Max allowed:\s+(?<Max_Allowed>\d+).*Files present:\s+(?<Present>\d+)"
| dedup Directory
| eval 50PercentThreshold = Max_Allowed * 0.5
| eval 90PercentThreshold = Max_Allowed * 0.9
| table _time, Directory, Present, Max_Allowed
| sort - Present

This gets a file count across several directories as well as the maximum allowed files in each directory (all within the log). My issue is that I can see the table cells colored in edit mode, but when I switch to preview, the colors are no longer present (no color at all). Additionally, is there a way to express multiple conditionals in the colorPalette? An example would be the following, but I must have the wrong syntax:

<format type="color" field="Present">
  <colorPalette type="expression">if(value &gt; $90PercentThreshold$, "#f50d0d", if(value &gt; $50PercentThreshold$ AND value &lt; $90PercentThreshold$, "#f2bb11", "#43e40a"))</colorPalette>
</format>

Thanks in advance!
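On the multiple-conditionals question, the documented pattern is nested if() with only the cell's own value available to the expression. One hedged guess about the disappearing colors: $90PercentThreshold$ is an eval field in the search, not a dashboard token, so nothing gets substituted into the expression and it fails to evaluate. With hard-coded thresholds (placeholder numbers, adjust to your data) a three-way version looks like:

<format type="color" field="Present">
  <colorPalette type="expression">if(value &gt; 90, "#f50d0d", if(value &gt; 50, "#f2bb11", "#43e40a"))</colorPalette>
</format>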
We are getting Jenkins logs into Splunk, so the events are coming in. I want to create a stage panel for each branch of a job, with a table like: 1. stage name, 2. branch, 3. total builds, 4. success builds, 5. failed builds. Can you give me a search query?
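A hedged sketch of the shape such a query could take; the index, sourcetype, and every field name here (stage_name, branch, build_result) are placeholders that depend on how your Jenkins data is ingested:

index=jenkins sourcetype=jenkins:build
| stats count AS total_builds,
        count(eval(build_result="SUCCESS")) AS success_builds,
        count(eval(build_result="FAILURE")) AS failed_builds
  by stage_name, branch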
I currently have this search right now, and I apologize in advance for my poor SPL. I would like to know how to run this with whatever the current date is, in YYYY-MM-DD format, as I am trying to get just the employees leaving on the day the search runs.

| inputlookup listofemployees.csv
| search Last_Day_of_Work="$Todays Date$"
| table Employee_ID, Last_Day_of_Work, effective, firstName, lastName

Please let me know, thank you for your help.
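If "today" should be computed at search time rather than taken from a token, a sketch using now() with the same lookup and field names, assuming Last_Day_of_Work is stored as YYYY-MM-DD:

| inputlookup listofemployees.csv
| where Last_Day_of_Work = strftime(now(), "%Y-%m-%d")
| table Employee_ID, Last_Day_of_Work, effective, firstName, lastName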
Dear Splunk Community, I have been trying to monitor user activities with Splunk. Through the documentation I found that I can analyze them through index=_audit; however, these records include activities that I have not carried out directly. For example, if I run the query "index=_audit user=my.user | stats count by user,action" over the last 24 hours, the result shows actions like edit_local_apps, search, list_workload_pools, list_health, quota, edit_roles, edit_roles_grantable, etc., and of those, the only activity I performed directly was "search". Perhaps you know how to discriminate, from all the audited actions, those that I carried out directly?
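One hedged angle: many of those actions (list_*, quota, edit_local_apps) are typically generated by the UI polling REST endpoints on your behalf, so they are recorded under your user even though you never clicked anything. For interactive searches specifically, recent versions tag audit events with a provenance field (assuming your version populates it), which separates ad-hoc searches from scheduled and system activity:

index=_audit user=my.user action=search info=granted
| stats count by user, action, provenance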
Hi, I got confused when running the following search to identify the enabled searches in the environment:

| rest splunk_server=local count=0 /services/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rename eai:acl.app as app, title as csearch_name, action.correlationsearch.label as csearch_label, action.notable.param.security_domain as security_domain
| table csearch_name, csearch_label, app, security_domain, description

because I got a completely different result when I added disabled=0. Apparently, there are correlation searches with action.correlationsearch.enabled=1 and disabled=1 at the same time. What does that mean? I found the searches disabled in Content Management, so why is action.correlationsearch.enabled equal to 1?
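As far as I know, action.correlationsearch.enabled=1 only marks a saved search as a correlation search (so Content Management lists it), while the separate disabled flag is the actual on/off switch; a correlation search you disable therefore keeps enabled=1 alongside disabled=1. A quick sketch to see both flags side by side, reusing your filter:

| rest splunk_server=local count=0 /services/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| eval state=if(disabled==1, "disabled", "enabled")
| table title, disabled, state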
Hi, below is a sample log file.

Sample LogFile:

12:08:32.797 [6] (null) DEBUG Bastian.Exacta.AMAT.ImportAdapter.Wcf.AMATWcfImport - JSON received for product import: {"records":[{"lgnum":"407","entitled":"4070","owner":"4070","product":"0205-02304","prd_descr":"PACKAGING, RUNNING BEAM GRIPPERS, REFLEX","base_uom":"EA","gross_weight":"0.000","net_weight":"1.000","weight_uom":"KG","volume":"6480.000","volume_uom":"CCM","length":"40.000","width":"18.000","height":"9.000","dimension_uom":"CM","serial_profile":null,"batch_req":null,"cycle_count_ind":"C","alternative_uom":"EA","shelf_life_flag":null,"shelf_life":null,"req_min_shelf_life":null,"req_max_shelf_life":null,"std_cost":"10.61","matnr":"0205-02304","suffix":null,"rev_level":"01","extension":null}]}
12:08:32.797 [6] (null) DEBUG Bastian.Exacta.Business.Xml.XmlEntity - Started saving XML entity of type 'ProductImportData'
12:08:32.844 [6] (null) DEBUG Bastian.Exacta.Business.Xml.XmlEntity - Finished XML entity of type 'ProductImportData'. Result: <?xml version="1.0" encoding="utf-16" standalone="yes"?> <PROD NAME="0205-02304">
14:54:00.242 [8] (null) DEBUG Bastian.Exacta.AMAT.ImportAdapter.Wcf.AMATWcfImport - JSON received for order line cancel import: {"records":[{"Header":{"lgnum":"407","who":"47708597","canrq":"X"},"Detail":[{"tanum":"97908517"}]}]}
14:54:00.242 [8] (null) DEBUG Bastian.Exacta.Business.Persistance.SessionFactory - Opening NHibernate session using the production factory...
14:54:00.258 [8] (null) DEBUG NHibernate.SQL - select order0_.ORDER_TYPE as col_0_0_ from ORDER_HEADER order0_ where order0_.ORDER_NAME=@p0 ORDER BY CURRENT_TIMESTAMP OFFSET 0 ROWS FETCH FIRST 1 ROWS ONLY;@p0 = '47708597' [Type: String (4000:0:0)]
14:54:00.273 [8] (null) DEBUG Bastian.Exacta.Business.Persistance.SessionFactory - Closing NHibernate session...
14:54:00.273 [8] (null) INFO Bastian.Exacta.AMAT.ImportAdapter.Wcf.AMATWcfImport - Creating order cancellation transaction for order 47708597, OrderType : 0
14:54:00.289 [8] (null) DEBUG Bastian.Exacta.Business.Persistance.SessionFactory - Opening NHibernate session using the production factory...
14:54:00.320 [8] (null) DEBUG NHibernate.SQL - select orderline1_.ORDER_LINE_ID as order1_236_, orderline1_.ORDER_LINE_TYPE as order2_236_, orderline1_.LINE_NUM as line3_236_, orderline1_.LOT_NUM_REQUESTED as lot4_236_, orderline1_.QTY_REQUESTED as qty5_236_, orderline1_.UOM_SPECIFIED as uom6_236_, orderline1_.SERIAL_NUM_REQUESTED as serial7_236_, orderline1_.SINGLE_LOT as single8_236_, orderline1_.DAYS_TO_EXPIRE as days9_236_, orderline1_.VAS as vas10_236_, orderline1_.KITTING as kitting11_236_, orderline1_.DEST_ZONE as dest12_236_, orderline1_.SOURCE_ZONE as source13_236_, orderline1_.SEQ_NUM as seq14_236_, orderline1_.RETURNED_INV as returned15_236_, orderline1_.WGT_REQUESTED as wgt16_236_, orderline1_.INVENTORY_GROUP as inventory17_236_, orderline1_.TOTAL_RECEIPT_QUANTITY as total18_236_, orderline1_.LOT_REVISION as lot19_236_, orderline1_.SERIAL_NUM_REQUIRED as serial20_236_, orderline1_.CAPTURE_COUNTRY_OF_ORIGIN as capture21_236_, orderline1_.SECONDARY_SCAN_TYPE as secondary22_236_, orderline1_.SUPPRESS_SCANS_AT_PICK as suppress23_236_, orderline1_.SHOULD_PICK_RESERVED_INVENTORY as should24_236_, orderline1_.QUAR_REASON as quar25_236_, orderline1_.INVOICE_NUMBER as invoice26_236_, orderline1_.INVENTORY_RESERVATION_KEY as inventory27_236_, orderline1_.SSU_VALUE_PER_ITEM as ssu28_236_, orderline1_.PROD_ID as prod29_236_, orderline1_.UOM_TYPE_REQUESTED as uom30_236_, orderline1_.ORDER_ID as order31_236_, orderline1_.WAVE_ID as wave32_236_, orderline1_.ROUTE_ID as route33_236_, orderline1_.DOCK_ID as dock34_236_, orderline1_.DEST_WAREHOUSE_ID as dest35_236_, orderline1_.SOURCE_WAREHOUSE_ID as source36_236_, orderline1_.DOCUMENT_ID as document37_236_, orderline1_.ADJUSTMENT_ORDER_ID as adjustment38_236_, orderline1_.BOM_ID as bom39_236_, orderline1_.BOM_LINE_ID as bom40_236_, orderline1_.BOM_PARENT_LINE_ID as bom41_236_, orderline1_.PREFERRED_CNTNR_PATTERN_ID as preferred42_236_, orderline1_.COUNTRY_OF_ORIGIN as country43_236_ from ORDER_LINE_DETAIL orderlined0_ inner join ORDER_LINE orderline1_ on orderlined0_.ORDER_LINE_ID=orderline1_.ORDER_LINE_ID inner join ORDER_HEADER order2_ on orderline1_.ORDER_ID=order2_.ORDER_ID where order2_.ORDER_NAME=@p0 and orderlined0_.DETAIL_TYPE=@p1 and (orderlined0_.DETAIL_VALUE in (@p2));@p0 = '47708597' [Type: String (4000:0:0)], @p1 = 1000 [Type: Decimal (0:10:29)], @Anonymous = '97908517' [Type: String (4000:0:0)] 14:54:00.336 [8] (null) DEBUG Bastian.Exacta.Business.Persistance.SessionFactory - Closing NHibernate session... 14:54:00.336 [8] (null) INFO Bastian.Exacta.AMAT.ImportAdapter.Wcf.AMATWcfImport - No order lines found for order 47708597 for order line cancellation request, cannot proceed with cancellation transaction. 
14:54:00.352 [8] (null) WARN Bastian.Exacta.AMAT.ImportAdapter.Wcf.AMATWcfImport - Exacta Event
<ORDER CANCEL="N" ORDER_NAME="47708600" TYPE="2">
<DETAIL TYPE="1005" />
<TRAILER_STOP>0</TRAILER_STOP>
<ORDER_PRIORITY>1</ORDER_PRIORITY>
<ORDER_LINE CANCEL="N" LINE_NUM="1">
<PROD_NAME>0010-01283</PROD_NAME>
<PROD_COMPANY_NAME>4070</PROD_COMPANY_NAME>
<PROD_VENDOR_NAME>4070</PROD_VENDOR_NAME>
<QTY_REQUESTED>1</QTY_REQUESTED>
<DETAIL TYPE="1000" VALUE="97908520" />
<DETAIL TYPE="1001" VALUE="1" />
<?xml version="1.0" encoding="utf-16" standalone="yes"?>
<ORDER CANCEL="N" ORDER_NAME="47708563" TYPE="1">
<DETAIL TYPE="1000" VALUE="" />
<DETAIL TYPE="1001" VALUE="90000086570010-01283" />
<DETAIL TYPE="1002" VALUE="1" />
<DETAIL TYPE="1003" VALUE="1" />
<DETAIL TYPE="1004" VALUE="ZCON" />
<TRAILER_STOP>0</TRAILER_STOP>

We want to ingest into the Splunk indexer only the lines that match the following four patterns:

<ORDER CANCEL="N" ORDER_NAME="XXXXXXXX" TYPE="1">
<ORDER CANCEL="N" ORDER_NAME="XXXXXXXX" TYPE="2">
Creating order cancellation transaction for order XXXXXXXX,
JSON received for product import: {"records":[{"lgnum":"407","entitled":"XXXX","owner":"XXXX","product":"XXXX-XXXXX",

Let me know how we can ingest only these matched lines to the Splunk indexer, each as a single event.

Thanks
Abhineet Kumar
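The standard approach for this (a sketch, with the sourcetype name as a placeholder) is a nullQueue filter at parsing time on the indexer or heavy forwarder: break each line into its own event, send everything to nullQueue, then route events matching the four patterns back to indexQueue.

# props.conf
[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-filter = drop_all, keep_order_events

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_order_events]
REGEX = <ORDER CANCEL="N" ORDER_NAME="\d+" TYPE="[12]">|Creating order cancellation transaction for order \d+|JSON received for product import:
DEST_KEY = queue
FORMAT = indexQueue

The order matters: drop_all runs first, then keep_order_events re-queues the matches.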
Hi Splunkers! I need to extract specific fields that the sourcetype does not already extract from the logs. Is there anything apart from regex and spath? Other approaches are also welcome. Fields to extract: OS, OSRelease. Thanks in advance, Manoj Kumar S
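If the raw events are key/value formatted, one non-regex option is the extract (kv) command with explicit delimiters; a sketch, where the index, sourcetype, and delimiters are assumptions about your data:

index=your_index sourcetype=your_sourcetype
| extract pairdelim=",", kvdelim="="
| table OS, OSRelease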
Hi Team, I wanted to know the default retention period of buckets in Splunk (hot, warm, cold, frozen, thawed). How can I find the retention period of each bucket, and where can I check it? Could you please help me with the location or path of each bucket's configuration in Splunk? Actually, I'm new to these bucket concepts. We have only 2 indexers, 1 license master and 1 search head. Thanks, Praseeda.
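For reference, retention is configured per index in indexes.conf rather than per bucket stage: frozenTimePeriodInSecs (default 188697600 seconds, about 6 years) controls when data is frozen, and maxTotalDataSizeMB (default 500000) caps the index size. A sketch of checking and setting it, with the index name as a placeholder:

# Show effective settings and which file each one comes from
/opt/splunk/bin/splunk btool indexes list myindex --debug

# indexes.conf (e.g. $SPLUNK_HOME/etc/system/local/indexes.conf)
[myindex]
homePath = $SPLUNK_DB/myindex/db          # hot + warm buckets
coldPath = $SPLUNK_DB/myindex/colddb      # cold buckets
thawedPath = $SPLUNK_DB/myindex/thaweddb  # thawed (restored) buckets
frozenTimePeriodInSecs = 7776000          # example: 90 days
maxTotalDataSizeMB = 500000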
Aqua reports about 50 CVEs on the metrics-1.2.2/fluentd-aggr-1.3.2 images. How frequently are these images remediated? What is the ETA for resolving those vulnerabilities?
We are using Splunk for Jenkins logs, so the events are coming into Splunk. How do I combine data from one index with another index? Can you give me a proper search query and answer?
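The simplest combination is an OR across indexes in the base search; a sketch with index names as placeholders:

(index=jenkins) OR (index=other_index)
| stats count by index, sourcetype

If the two indexes share a key field, running stats by that field after the OR is the usual way to correlate them (often preferable to join).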
How do I change the architecture from a single indexer to an indexer cluster with an indexer manager? I need an overview of which configuration files need to be changed to move from a single Splunk indexer to multiple indexers. This should also include an indexer manager server.
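At its core the change lives in server.conf on each node; a hedged sketch using the pre-9.x attribute names (9.x also accepts mode = manager and manager_uri), with the secret and host as placeholders:

# server.conf on the manager (master) node
[clustering]
mode = master
replication_factor = 2
search_factor = 2
pass4SymmKey = <your_secret>

# server.conf on each indexer (peer)
[replication_port://9887]

[clustering]
mode = slave
master_uri = https://<manager_host>:8089
pass4SymmKey = <your_secret>

The search head gets mode = searchhead pointing at the same master_uri, and indexes.conf is then distributed from the manager via the master-apps (manager-apps) bundle.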
When attaching a file to some SOAR actions (e.g. send email, or update ticket in Jira), I am getting the following error even though the vault ID is valid:

Error occurred while updating the ticket. Failed to add attachment. Error message: Could not find specified vault ID in vault