All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

OK, looks relatively good. Try running | rest splunk_server=<your_cluster_manager> /services/cluster/manager/peers | table label site status from your MC.
What do you mean by "I integrated my UF with Splunk"? Also the usual questions. 1. Do you have _any_ events from this forwarder (especially forwarder's own logs in _internal index) in your Splunk? 2. Do you have connectivity from your UF to your receiving component(s)? Did you verify it manually? 3. Did you check your forwarder's logs ($SPLUNK_HOME/var/log/splunk/splunkd.log) for errors?
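As a sketch of check #3 (with a pointer to #2), assuming a POSIX shell — the splunkd.log lines below are fabricated samples so the commands can be demonstrated anywhere; on a real forwarder, point LOG at the actual $SPLUNK_HOME/var/log/splunk/splunkd.log:

```shell
#!/bin/sh
# fabricated sample log; on a real UF use e.g.
#   LOG=/opt/splunkforwarder/var/log/splunk/splunkd.log
LOG=./splunkd.log.sample
printf '%s\n' \
  '11-26-2023 01:20:51.500 +0000 INFO  TailReader - Registering metrics callback' \
  '11-26-2023 01:20:52.000 +0000 ERROR TcpOutputFd - Connection to host=10.0.0.5:9997 failed' \
  > "$LOG"

# check #3: surface ERROR/WARN lines; connection failures point back at check #2
errors=$(grep -E 'ERROR|WARN' "$LOG")
echo "$errors"

# check #2: verify connectivity to the receiver manually, e.g.:
#   nc -vz 10.0.0.5 9997
```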
While there is no explicitly stated mpreview introduction in the 8.1.0 changelog, the command itself shows up in the 8.1.0 version of the docs, which suggests it wasn't available in previous versions and was introduced in 8.1.
splunk btool server list clustering | grep factor

Hi, thanks. Here is the output:

etc/system/default/server.conf >>> ack_factor = 0
etc/apps/MULTI_SITE_APP/local/server.conf >>> replication_factor = 2
etc/apps/MULTI_SITE_APP/local/server.conf >>> search_factor = 2
etc/apps/MULTI_SITE_APP/local/server.conf >>> site_replication_factor = origin:1, site1:1, site2:1, total:2
etc/apps/MULTI_SITE_APP/local/server.conf >>> site_search_factor = origin:1, site1:1, site2:1, total:2
etc/system/default/server.conf >>> replication_factor = 3

Regards,
I have enabled sysevent and sys_audit in the Splunk instance, but I am not receiving any logs from ServiceNow. Do I have to configure anything on ServiceNow?
Can you please tell me what we need to enable on the ServiceNow instance to send its logs to Splunk? I enabled sysevent and syslog_audit on the Splunk instance but am not receiving logs. What should I do?
I installed a Universal Forwarder on a Linux machine and integrated it with Splunk, but there are no logs returned on the Splunk search head. For context, I'm currently working on a distributed Splunk Enterprise deployment. Any recommendations?
Hi @SplunkySplunk, as @inventsekar said, these are three ways to accelerate searches that work in different ways and are suited to different conditions.

For example, I used report acceleration when I had a dashboard with many real-time searches used by many users: I created an accelerated report that was visualized in the dashboard. In this way I had a near-real-time dashboard, used by many users, that ran only one search.

Data models are the most efficient solution if you have to search only using predefined fields.

Summary indexes are very useful when you want to reduce and structure your logs: e.g., if you have the logs from a firewall (which are usually very numerous, with many fields that aren't always used!), you can reduce the logs and run your searches on the reduced events, including their raw data.

As @inventsekar and I said, it depends on what your requirement is.

Ciao. Giuseppe
Hi team, could you please help with my request?
I did not find it, and I am also looking for the solution.
I updated the Universal Forwarder with the RPM and deb packages using the following commands: rpm -Uvh and dpkg -i
Hi, I'm using Splunk 8.0.4, and when I use mpreview it returns "Unknown search command 'mpreview'". Any idea? Thanks
Two questions.

1. In addition to comparison of host values for the purpose of exclusion, is there any other information you want to extract from sysmon_index? I will assume no.
2. Is presence in sysmon_index necessary and sufficient for exclusion? I will assume yes.

With these assumptions, you don't need stats. The following subsearch should suffice:

index=dhcp_source_index NOT [search index=sysmon_index | stats values(host) as host]
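Loosely speaking, the NOT [subsearch] acts as a set difference on host. As a shell analogy (both host lists fabricated for illustration):

```shell
#!/bin/sh
# hosts that appear in sysmon_index (fabricated sample)
printf 'hostA\nhostB\n' > sysmon_hosts.txt
# hosts seen in dhcp_source_index (fabricated sample)
printf 'hostA\nhostC\nhostD\n' > dhcp_hosts.txt

# keep only dhcp hosts with no exact match in the sysmon list,
# i.e. the events the NOT [subsearch] would retain
kept=$(grep -v -x -F -f sysmon_hosts.txt dhcp_hosts.txt)
echo "$kept"
```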
A possible cause would be a missing JRE installation. https://docs.splunk.com/Documentation/DBX/3.15.0/DeployDBX/Prerequisites#Java_Runtime_Environment_.28JRE.29_requirements

Validate that a JRE is installed:

java --version

If nothing is returned, then on Ubuntu:

sudo apt install default-jre

Then restart Splunk.
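If it helps, that check can be scripted; a minimal sketch assuming a POSIX shell (the apt hint applies to Ubuntu only):

```shell
#!/bin/sh
# report whether a JRE is on PATH; DB Connect's task server needs one to start
if command -v java >/dev/null 2>&1; then
  status="found: $(java -version 2>&1 | head -n 1)"
else
  status="missing - on Ubuntu: sudo apt install default-jre, then restart Splunk"
fi
echo "$status"
```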
The above response assumes Time is an absolute offset from Trigger Time and not the interval between samples. You can also extract the date and time from the source file name using a custom datetime.xml configuration, but INGEST_EVAL is easier to maintain.
Hi @kyokei,

The "Trigger Time" line will be lost to subsequent events after it's either discarded as a header or broken into an event. If you have the ability to manipulate the source file name, you can add the fractional seconds value to the file name and reference the source when extracting timestamps:

AUTO_231126_012051_500_0329.CSV

With that change made, you can, for example, combine INDEXED_EXTRACTIONS with TRANSFORMS and INGEST_EVAL to extract CSV fields and set _time for each event:

# inputs.conf
[monitor:///path/to/AUTO_*.CSV]
index = main
sourcetype = sensor_csv

# props.conf
[sensor_csv]
# disable default timestamp extraction and suppress errors
DATETIME_CONFIG = CURRENT
# enable indexed extractions for CSV files
INDEXED_EXTRACTIONS = CSV
# use header line 12 for field names:
#   "Time","U1-2[]","Event"
# these will be "cleaned" by Splunk:
#   Time
#   U1_2
#   Event
HEADER_FIELD_LINE_NUMBER = 12
# execute a transform to extract the _time value
TRANSFORMS-sensor_csv_time = sensor_csv_time

# transforms.conf
[sensor_csv_time]
INGEST_EVAL = _time:=strptime(replace(source, ".*(AUTO_\\d{6}_\\d{6}_\\d{3}).*", "\\1"), "AUTO_%y%m%d_%H%M%S_%N")+tonumber(coalesce(replace(_raw, "^(?!\")([^,]+),.*", "\\1"), 0))

``` search ```
index=main sourcetype=sensor_csv
| table _time source Time U1_2 Event

_time                    source                           Time              U1_2          Event
2023-11-26 01:20:52.500  AUTO_231126_012051_500_0329.CSV  +1.000000000E+00  +3.16000E+00  0
2023-11-26 01:20:52.400  AUTO_231126_012051_500_0329.CSV  +9.000000000E-01  +3.16500E+00  0
2023-11-26 01:20:52.300  AUTO_231126_012051_500_0329.CSV  +8.000000000E-01  +3.19400E+00  0
2023-11-26 01:20:52.200  AUTO_231126_012051_500_0329.CSV  +7.000000000E-01  +3.18400E+00  0
2023-11-26 01:20:52.100  AUTO_231126_012051_500_0329.CSV  +6.000000000E-01  +3.17300E+00  0
2023-11-26 01:20:52.000  AUTO_231126_012051_500_0329.CSV  +5.000000000E-01  +3.17300E+00  0
2023-11-26 01:20:51.900  AUTO_231126_012051_500_0329.CSV  +4.000000000E-01  +3.19100E+00  0
2023-11-26 01:20:51.800  AUTO_231126_012051_500_0329.CSV  +3.000000000E-01  +3.60100E+00  0
2023-11-26 01:20:51.700  AUTO_231126_012051_500_0329.CSV  +2.000000000E-01  +7.93600E+00  0
2023-11-26 01:20:51.600  AUTO_231126_012051_500_0329.CSV  +1.000000000E-01  +1.45180E+01  0
2023-11-26 01:20:51.500  AUTO_231126_012051_500_0329.CSV  +0.000000000E+00  +2.90500E+00  0
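To sanity-check the arithmetic in the INGEST_EVAL above, here's a small shell sketch (GNU date assumed, UTC used for simplicity; file name and Time value taken from the first result row) computing _time = file-name timestamp + fractional-second suffix + the row's Time offset:

```shell
#!/bin/sh
# file name carries the base timestamp plus a milliseconds suffix
fname="AUTO_231126_012051_500_0329.CSV"
offset="+1.000000000E+00"   # the row's Time column

# pull "2023-11-26 01:20:51.500" out of the file name
base=$(echo "$fname" | sed -E \
  's/AUTO_([0-9]{2})([0-9]{2})([0-9]{2})_([0-9]{2})([0-9]{2})([0-9]{2})_([0-9]{3}).*/20\1-\2-\3 \4:\5:\6.\7/')

secs=${base%.*}        # "2023-11-26 01:20:51"
frac="0.${base##*.}"   # "0.500"

epoch=$(date -u -d "$secs" +%s)   # GNU date

# _time = base epoch + fractional suffix + Time offset
_time=$(awk -v e="$epoch" -v f="$frac" -v o="${offset#+}" \
  'BEGIN { printf "%.3f", e + f + o }')
echo "$_time"   # epoch seconds for 2023-11-26 01:20:52.500 UTC
```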
@PickleRick @ITWhisperer Thanks for the responses. I tried both and they both give me the same result, but it's still not exactly what I had in mind. Here's the result:

What I'm trying to do is more along the lines of getting a count of products that sold at specific price points from transactions that may have multiple items purchased. Maybe what I'm trying to do isn't really possible, or isn't the best approach to the problem, but the following table shows roughly what I'm trying to accomplish.

products     product_prices  count(products)
product_100  100             2
product_200  200             1
product_300  300             1
product_400  400             1
Yep. Check the output of splunk btool server list clustering | grep factor
Hi, and thanks for the reply. "And what are your site RF/SF" > can you be more specific, please? In the server.conf on my CM? (I will check that when I'm back at work tomorrow.) For the site details: 2 sites with 18 indexers in total, so 9 on one site and 8 + 1 decommissioned on the other site. I'll get back to you tomorrow morning. Regards,
Great. Could you please share what you have found? I would like to see it. Thanks