All Topics

index=pcf_logs cf_org_name=creorg OR cf_org_name=SvcITDnFAppsOrg cf_app_name=VerifyReviewConsumerService host="*"
| eval message = case(like(msg,"%Auto Approved%"), "Auto Approved", like(msg,"%Auto Rejected%"), "Auto Rejected", 1=1, msg)
| stats sum(Count) as Count by message
| table message Count

The msg field in my events contains "Auto Approved" or "Auto Rejected" somewhere inside a long sentence. I want to count the Auto Approved and Auto Rejected events, but the query above does not give the expected result.
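Two likely culprits, offered as a sketch rather than a confirmed fix: sum(Count) only produces output if a field named Count already exists on the events, and the bare OR binds loosely without parentheses, so the org filter may not apply the way it reads. Counting the events directly might look like this (index and field names taken from the question):

index=pcf_logs (cf_org_name=creorg OR cf_org_name=SvcITDnFAppsOrg) cf_app_name=VerifyReviewConsumerService
| eval message = case(like(msg,"%Auto Approved%"), "Auto Approved", like(msg,"%Auto Rejected%"), "Auto Rejected", 1=1, "Other")
| stats count by message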
How can I integrate Trend Micro Apex One with Splunk Enterprise?
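One common pattern, sketched under the assumption that events are forwarded from Apex Central over syslog (the port, index, and sourcetype below are placeholders, not values from Trend Micro documentation):

# inputs.conf on a heavy forwarder, hypothetical values
[tcp://5140]
sourcetype = trendmicro:apexone
index = epsecurity

Splunkbase may also carry a Trend Micro add-on with ready-made field extractions; it is worth checking there before building a sourcetype by hand.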
I would like to ingest the data in /var/log as correctly as possible. Currently I am simply monitoring the entire /var/log folder with no pre-selected source type. On the List of pretrained source types I see a few callouts for log files, such as syslog, but the majority of log files are not present in this list. Perhaps some of these types can be used elsewhere, though? For example, I see the linux_messages_syslog pretrained type refers to logs in /var/log/messages, and since syslog != messages, I presume this type may be useful on other files as well. So should I use the few pretrained source types and then make my own source types for all the other log files? Is there any repository of user-created source types? I have to imagine most log file types have had source types created for them by now. Or do people just not apply source types and simply search the unstructured data?
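A minimal sketch of per-file sourcetyping: the stanzas below pair a few common /var/log files with pretrained source types, and any file without a pretrained match would get a custom name in the same way.

# inputs.conf
[monitor:///var/log/messages]
sourcetype = linux_messages_syslog

[monitor:///var/log/secure]
sourcetype = linux_secure

[monitor:///var/log/audit/audit.log]
sourcetype = linux_audit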
Hi Team, I am trying to fetch the count and percentage of hosts having successes and failures, along with the failure percentage.

host = Server1 Server2 Server3 Server4 Server5

If a specific host has no events, I want to show it as well, with a result of 0. I am running the query below, but it is missing some servers because there are no events on those servers for the last 2 hours.

index=server_list host IN (Server1,Server2,Server3,Server4,Server5) event_status="*"
| eval pass=if(like(event_status,"20%"),1,0)
| eval fail=if(!like(event_status,"20%"),1,0)
| stats count as Overall_Volume, sum(pass) as Passed, sum(fail) as Failed by host
| eval Failure_Rate=round(Failed/(Passed+Failed)*100,2)
| fillnull value=0

This is the result I am getting, because Server4 and Server5 have had no traffic for the last 2 hours:

host     Overall_Volume  Passed  Failed  Failure_Rate
Server1  2               1       1       50
Server2  10              6       4       40
Server3  1               0       1       100

Can anyone help with a query so that I get all servers, with values of 0 if there is no traffic?
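One common pattern, sketched with the host list inlined: append a zero-count placeholder row for every expected host, then take the max per host so real counts win over the placeholders.

index=server_list host IN (Server1,Server2,Server3,Server4,Server5) event_status="*"
| eval pass=if(like(event_status,"20%"),1,0), fail=if(!like(event_status,"20%"),1,0)
| stats count as Overall_Volume, sum(pass) as Passed, sum(fail) as Failed by host
| append
    [| makeresults
     | eval host=split("Server1,Server2,Server3,Server4,Server5", ",")
     | mvexpand host
     | eval Overall_Volume=0, Passed=0, Failed=0]
| stats max(Overall_Volume) as Overall_Volume, max(Passed) as Passed, max(Failed) as Failed by host
| eval Failure_Rate=if(Passed+Failed>0, round(Failed/(Passed+Failed)*100,2), 0)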
Hello
I have read the Splunk documentation regarding subsearches: https://docs.splunk.com/Documentation/Splunk/8.2.2/Search/Aboutsubsearches
There are two things I don't understand.
1) Unless I am mistaken, the subsearch below

sourcetype=syslog [search sourcetype=syslog earliest=-1h | top limit=1 host | fields + host]

provides the same result as the standard search

sourcetype=syslog earliest=-1h | top limit=1 host | fields + host

but the main difference is that with the subsearch I directly collect only the matching events, while with the standard search I collect all the events from earliest=-1h and only afterwards display the top host with top limit=1. Is that correct?
2) The documentation says that the second reason to use a subsearch is to "Run a separate search and add the output to the first search using the append command". Does that mean only the append command can be used after the brackets, or is it also possible to use the join, appendcols or appendpipe commands? I have already seen this done! If it is possible, what are the differences between append in a subsearch and join, appendcols or appendpipe?
Thanks in advance
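A small illustration of the difference, using made-up index names: append stacks the subsearch rows under the main results, appendcols pastes its columns alongside row by row, and join matches rows on a shared field.

index=web | stats count as web_events
| append [search index=app | stats count as app_events]

yields two rows, each with one populated column, whereas

index=web | stats count as web_events
| appendcols [search index=app | stats count as app_events]

yields a single row containing both columns.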
Good morning, I have created a simple dashboard that returns information on specific errors. The returned information is Time of event, Server, Sourcetype and Event. From the data returned, I want to be able to click on a value in a cell and have it open that specific log information. I feel like I am close, but I am really new to Splunk.

index=OH sourcetype=Buckeyes host=Server1 source=/usr/local/ESP/Agent/spool/CDCDEV/MAIN/* "Exception text ORA00001: unique constraint" OR "fatal segments" OR "Maximum 91" OR "ORA01017: invalid username/password; logon denied" OR "Caused by: com.nf.nexus.batchx.workflow.WorkflowException: Error!!! job with name" OR "Unable to locate work flow implementation for es.link.batchx.nfs.control.processtask.monitor.recover.WorkflowImpl" OR "SEC049: Invalid Logon Attempt|Incorrect Password."
| dedup host, source
| fields - _serial, _bkt, _cd, _indextime, _si, _subsecond, splunk_server, tag::host, _sourcetype
| fields + _time, _raw, host, source
| convert timeformat="%m-%d-%Y %l:%M:%S %p" ctime(_time)
| eval _time=(substr(_time, 0, 11) + substr(_raw, 0, 8))
| eval source=substr(source,40)
| rename _raw AS "Event", source AS "ESP Job", _time AS "Time", host AS "Server"
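A sketch of a Simple XML drilldown that opens a new search scoped to the clicked row's server; $row.Server$ refers to the renamed Server column above, and the time range is an arbitrary assumption:

<table>
  <search>
    <query>...the search above...</query>
  </search>
  <drilldown>
    <link target="_blank">search?q=search%20index%3DOH%20host%3D$row.Server|u$&amp;earliest=-24h&amp;latest=now</link>
  </drilldown>
</table>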
Hello All, I've been trying to get the hang of the syntax within Splunk and have been able to suss out a basic understanding. True to form, I usually end up jumping into the deep end when I do things, so bear with me. I am attempting to create a report/search/dashboard that looks over the last four hours and displays the largest percentage increase of a value. The field is BIN, currently stored as a numerical value; I have tried the tostring command to transform it, but that usually ends with no values being returned or all of them being grouped together. But I digress: how would I create a search/table view, updated on a set schedule (say every hour), that looks at each timeframe as a percentage of total records for that timeframe, calculates the percentage change between the two timeframes, and filters to the top 20 increases? Example: I would want to ignore any decreasing values and ideally only see the top 20 that had increased by 15% or more.

BIN      Percent 2 hours ago  Percent 1 hour ago  Percent change
123456   10%                  12%                 16.7%
234561   10%                  8%                  -25%
345612   30%                  25%                 -20%
456123   35%                  30%                 -16.7%
561234   15%                  25%                 40%
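A sketch of one way to do this, with the index name and the 15%/top-20 thresholds wired in as assumptions: bucket the last two whole hours, express each BIN as a share of its hour, then compare the shares (here the change is computed against the earlier hour).

index=your_index earliest=-2h@h latest=@h
| eval window=if(_time >= relative_time(now(), "-1h@h"), "latest", "previous")
| stats count by BIN window
| eventstats sum(count) as total by window
| eval share=round(count/total*100, 2)
| chart values(share) over BIN by window
| eval pct_change=round((latest-previous)/previous*100, 2)
| where pct_change >= 15
| sort - pct_change
| head 20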
I've installed the OTel collector for collecting infrastructure monitoring data from Linux and Windows machines. I'm not able to get the values for IP address, MAC address, or serial number for those machines in the cloud. I checked in the UI: I can see OS information, but I also need at least one of those three fields to be synced up to the cloud. Is there a way to sync up network data in infrastructure monitoring?
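Depending on the collector version, the resourcedetection processor's system detector can emit host.ip, host.mac and host.id as opt-in resource attributes. A sketch of the relevant config fragment, to be verified against your collector's documentation:

processors:
  resourcedetection:
    detectors: [system]
    system:
      resource_attributes:
        host.ip:
          enabled: true
        host.mac:
          enabled: true
        host.id:
          enabled: true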
Hi, I wrote the following query to identify searches running in verbose mode, but it seems to be including reports that have been deleted. Is there a field I can use to exclude them? I have had a look but nothing obvious comes to me.

| rest /servicesNS/-/-/saved/searches
| search alert_type="always" NOT title="instrumentation.*"
| table eai:acl.owner title description *mode* is_scheduled
| search display.page.search.mode=verbose

Thanks
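One guess, offered as a sketch rather than a confirmed answer: reports that look deleted are sometimes just disabled, or orphaned after their owner was removed. The endpoint exposes a disabled field that can be filtered on:

| rest /servicesNS/-/-/saved/searches
| search alert_type="always" NOT title="instrumentation.*" disabled=0
| table eai:acl.owner title description *mode* is_scheduled disabled
| search display.page.search.mode=verbose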
How do I add the two values from stats which I get from this query?
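Assuming the two values land in two fields of a single stats row (the field names here are placeholders), eval can sum them:

... | stats sum(bytes_in) as bytes_in, sum(bytes_out) as bytes_out
| eval total=bytes_in + bytes_out

If instead they are two rows of one column, addcoltotals appends a row with the column sum:

... | stats count by status
| addcoltotals labelfield=status label=Total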
Hi, I have the table below, to which I want to add gridlines. It is controlled by the CSS below. Can anybody suggest?

table td > a, table td > a:hover, header {
    color: #fff;
}
/* [47] */
.table-chrome, .table-chrome .sorts a, .table-chrome .sorts a:hover {
    color: white !important;
    border: 2px solid black;
}
/* [48] */
.table-chrome > thead > tr > th {
    background-image: none;
    background-color: transparent !important;
    border: 2px solid black;
    font-size: medium;
    text-align: -webkit-auto;
}
/* background colour for tables: Status volume & alert breaches */
/* [49] */
.table-chrome.table-striped > tbody > tr.odd > td, .table-chrome.table-striped > tbody > tr.even > td {
    /* background-color: #393a4b; */
    padding-top: 1px;
    border-color: transparent;
    text-align: auto;
}
/* [50] */
tr.shared-resultstable-resultstablerow.odd, tr.shared-resultstable-resultstablerow.even {
    background: none;
}
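A sketch of one way to get gridlines, reusing the selectors already present above (the colour and width are arbitrary choices): the [49] rule currently sets border-color: transparent on the body cells, which hides any grid, so the cell borders need to be turned back on.

.table-chrome.table-striped > tbody > tr.odd > td,
.table-chrome.table-striped > tbody > tr.even > td {
    border: 1px solid black !important;
}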
I want to create a tile visualization which takes my search and then gives me the percentage of non-200 results from the "Response" field. Has anybody done this before?
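A sketch, with the base search left as a placeholder; a single value ("tile") visualization can then render pct:

index=your_index Response=*
| stats count(eval(Response!=200)) as non_200, count as total
| eval pct=round(non_200/total*100, 2)
| fields pct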
So this is what my data looks like. I need to check whether the last column's value is within the range of the last 75 days; in other words, that the date is later than 75 days ago. How can I proceed?
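A sketch, assuming the last column is called expiry_date and uses a YYYY-MM-DD format (adjust the strptime format string to match the real data):

| eval expiry_epoch=strptime(expiry_date, "%Y-%m-%d")
| where expiry_epoch >= relative_time(now(), "-75d@d")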
Hi everybody
I have a couple of questions regarding the compatibility between Splunk Enterprise Server and Universal Forwarders. This is mainly about the Mac OS X clients, which, as everyone knows, always ship with the latest OS X when ordered. Our servers are still on Enterprise release 8.0.9, but we can't install that version on the Mac OS X 11 clients.
According to Splunk, Universal Forwarder 8.0.9 is not in the compatibility list for OS X 11 clients (only OS X 10.13, 10.14, 10.15). That means you have to install the 8.2.x version; from then on, OS X 11 appears as compatible.
This means that the Universal Forwarder would have a higher version installed than the server. However, based on the compatibility list from Splunk (https://docs.splunk.com/Documentation/Forwarder/8.0.9/Forwarder/Compatibilitybetweenforwardersandindexers), compatibility would be given.
Can we install Universal Forwarder 8.0.9 on OS X 11 clients without having problems? Or can we install Universal Forwarder 8.2.1 on the OS X 11 clients without having problems with our Enterprise servers, which are still on version 8.0.9? Many thanks for your hints.
Hi
I am trying to list the advantages of macro usage in Splunk. As far as I know, the main use case is that if the name of the index or of the sourcetype changes, we just have to change the macro. But are there other benefits to using a macro? For example, is a macro faster?
Thanks
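For illustration, a macro also centralizes arguments, not just static strings; the names below are made up:

# macros.conf
[web_errors(1)]
args = status
definition = index=web sourcetype=access_combined status=$status$

Used in a search as `web_errors(500)` | stats count by uri, every search that embeds it picks up a rename or fix in one place. (A macro is expanded at search-parse time, so by itself it is no faster than pasting the same SPL inline.)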
Hi, I am trying to set up the latest version of the Splunk forwarder for the first time on a Linux server. However, after executing the command below, I am getting errors. Please suggest what the issue could be here.

/splunk/splunkforwarder/bin> ./splunk start --accept-license
This appears to be your first time running this version of Splunk.
Splunk software must create an administrator account during startup. Otherwise, you cannot log in.
Create credentials for the administrator account.
Characters do not appear on the screen when you type in credentials.
Please enter an administrator username: administrator
Password must contain at least:
* 8 total printable ASCII character(s).
Please enter a new password:
Please confirm new password:
Splunk> Like an F-18, bro.
Checking prerequisites...
Checking mgmt port [8089]: open
Creating: /splunk/splunkforwarder/var/lib/splunk
Creating: /splunk/splunkforwarder/var/run/splunk
Creating: /splunk/splunkforwarder/var/run/splunk/appserver/i18n
Creating: /splunk/splunkforwarder/var/run/splunk/appserver/modules/static/css
Creating: /splunk/splunkforwarder/var/run/splunk/upload
Creating: /splunk/splunkforwarder/var/spool/splunk
Creating: /splunk/splunkforwarder/var/spool/dirmoncache
Creating: /splunk/splunkforwarder/var/lib/splunk/authDb
Creating: /splunk/splunkforwarder/var/lib/splunk/hashDb
ERROR: pid 27238 terminated with signal 11
SSL certificate generation failed.

Referring to this link: https://splunkcommunities.force.com/customers/apex/ArticleDetailPage?URLName=Splunk-Won-t-Start-ERROR-SSL-Certificate-Generation-Failed
The resolution there is: the main reason for this issue/error is an application on the operating system named CylancePROTECT; it won't let Splunk create the certificates, so Splunk won't be able to start. The customer will be able to start Splunk after disabling Cylance.
Is there a workaround so that we don't have to disable the security software?
Hi All, I'm struggling with the syslog configuration to forward events while maintaining the original source IP. An rsyslog daemon collects the data in a file, and I then need to forward it, after parsing, to a third-party syslog receiver. On my HF I have the following configuration:

inputs.conf
[monitor:///opt/syslog/udp_514/udp_switch.log]
disabled = 0
sourcetype = syslog

outputs.conf
[syslog:forward_syslog]
server = 172.18.0.32:514

props.conf
[source::/opt/syslog/udp_514/udp_switch.log]
TRANSFORMS-t1 = to_syslog,to_null

transforms.conf
[to_syslog]
REGEX = <regex filter>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = forward_syslog

[to_null]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = nullQueue

This configuration works fine; unfortunately, the source IP is changed. The log in udp_switch.log is
"Sep 8 11:30:52 10.10.10.5 TEST5,007251000106157"
but on the third-party syslog receiver the IP changes from "10.10.10.5" to the heavy forwarder's own address. Is it possible to maintain the original IP, and how? Many thanks
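Splunk's syslog output sends from the HF itself, so the packets will always carry the HF's address. One alternative, sketched here and to be validated against your rsyslog version: have rsyslog relay the matching events directly and spoof the original sender with the omudpspoof output module, bypassing Splunk for the third-party feed (the filter condition is hypothetical):

# rsyslog.conf fragment
module(load="omudpspoof")
if ($msg contains "TEST5") then {
    action(type="omudpspoof" target="172.18.0.32" port="514")
}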
Hi, Splunk logs are truncated to 10,000 characters. Please let me know whether TRUNCATE = 20000 needs to be set in the Splunk server installation location or in the forwarder installation location. Regards, Madhusri R
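TRUNCATE is a props.conf parsing setting, so it takes effect where events are parsed: on the indexer, or on a heavy forwarder if one sits in the path, not on a universal forwarder. A sketch, with the sourcetype name as a placeholder (note the value takes no comma):

# props.conf on the indexer or heavy forwarder
[your_sourcetype]
TRUNCATE = 20000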
How do I convert alphanumeric values to numeric values when the length of the values changes each time? Can someone suggest?

ClusterCPUUsed=31684
ClusterCPUHALimit=383880
HARamGBLimit=589
ClusterMemUsedGB=201
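A sketch, assuming the values arrive as strings with possible non-digit characters mixed in (the field name is taken from the sample): strip everything that is not a digit or decimal point, then cast, so the varying length does not matter.

| eval ClusterMemUsedGB=tonumber(replace(ClusterMemUsedGB, "[^\d.]", ""))

If the key=value pairs are still embedded in _raw, a regex extraction handles varying lengths as well:

| rex field=_raw "ClusterMemUsedGB=(?<ClusterMemUsedGB>\d+)"
| eval ClusterMemUsedGB=tonumber(ClusterMemUsedGB)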