All Posts


You could create your own SHC (search head cluster) for the HFs, without any normal search head activity from end users. There is no need to use the regular end-user SHC; it's only the clustering functionality that is needed!
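For anyone trying this, here is a minimal sketch of how such a dedicated cluster could be initialized with the standard splunk init shcluster-config CLI; all host names, ports, credentials, and the label below are placeholders, not values from this thread, and an SHC needs at least three members:

# run on each HF member of the dedicated cluster (placeholders throughout)
splunk init shcluster-config -auth admin:changeme \
    -mgmt_uri https://hf1.example.com:8089 \
    -replication_port 9887 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -secret yourSHCsecret \
    -shcluster_label hf_shc
splunk restart

# then, on one member only, bootstrap the captain
splunk bootstrap shcluster-captain \
    -servers_list "https://hf1.example.com:8089,https://hf2.example.com:8089,https://hf3.example.com:8089" \
    -auth admin:changeme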
Hi, I'm also running into this issue frequently. Has anyone worked on this or found a solution? Thanks!
Hi @Nraj87, HA is a continuous open issue for DB-Connect. The easiest solution, as @isoutamo hinted, is to install DB-Connect in a Search Head Cluster, so the cluster gives you the requested HA features, but not all customers want to have an input processor (like DB-Connect) on front-end systems accessed by all the users.

I hope that Splunk will design a solution for this as soon as possible; for the moment, there's a request on ideas.splunk.com that you could vote for: https://ideas.splunk.com/ideas/EID-I-85

There are two problems to solve for HA: the checkpoint and the input enablement. A non-automatic workaround is to install DB-Connect on at least two HFs and create a scheduled script that makes a KV store backup from the main HF and a restore on the secondary one; in this way you align the checkpoint between the two HFs. Obviously, in case of disaster you'll have duplicated data for the period since the last alignment. Then you have to manually start the secondary DB-Connect when a disaster occurs and stop it when the disaster period is over. It's a "porkaround", not a solution! We are still waiting for the fix from Splunk, which is many years late (vote for it!). Ciao. Giuseppe
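As a rough illustration of the scheduled script Giuseppe describes, here is a minimal sketch using the standard splunk backup kvstore / splunk restore kvstore CLI commands; the host names, archive name, paths, and credentials are placeholders, and you would need to adapt everything to your environment:

#!/bin/bash
# Runs on the primary HF on a cron schedule (all paths and hosts are assumptions).
SPLUNK_HOME=/opt/splunk
ARCHIVE=dbx_checkpoint_$(date +%Y%m%d%H%M)

# 1. Back up the KV store; the archive lands in $SPLUNK_HOME/var/lib/splunk/kvstorebackup.
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName "$ARCHIVE" -auth admin:changeme

# 2. Copy the archive to the standby HF.
scp "$SPLUNK_HOME/var/lib/splunk/kvstorebackup/$ARCHIVE.tar.gz" \
    splunk@hf2.example.com:/opt/splunk/var/lib/splunk/kvstorebackup/

# 3. Restore it on the standby HF (the KV store must be running there).
ssh splunk@hf2.example.com \
    "/opt/splunk/bin/splunk restore kvstore -archiveName $ARCHIVE.tar.gz -auth admin:changeme"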
Hi @Easwar.C, Thank you for posting to the community. Is your jar file placed under the path <JRE_HOME>/lib/ext, as indicated in the prerequisites for Object Instance Tracking? Also, when using the JDK runtime environment, we need to set the classpath using the -classpath option for the application. After these settings, restart the JVM. Moreover, it is worth checking whether the user currently running the JVM has permission to read the jar file, because this error can also be triggered by a permission denial. Hope this helps, Martina
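For reference, a minimal sketch of launching an instrumented application with an explicit classpath; the jar names, paths, and main class below are placeholders, not taken from this thread:

java -classpath /opt/app/lib/your-classes.jar:/opt/app/lib/* \
     -javaagent:/opt/appdynamics/javaagent/javaagent.jar \
     com.example.YourMainClass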
Hi Team, I'm working on setting up a dashboard that includes the following EUM Browser metrics:
- Monthly Active Users
- Bounce Rate
- Session Duration
- Daily Average Active Users
Could anyone provide guidance on how to retrieve these metrics and display them on a dashboard? Best regards, Nivedita Kumari
Hi @Alnardo, which type of disks are you using for your Search Head and your Indexers? How many IOPS do your disks deliver? Remember that Splunk requires at least 800 IOPS, and with faster disks you'll have faster searches. For more information, see https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware Ciao. Giuseppe
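If you want to measure the IOPS of your storage, one common approach is a quick fio benchmark; a minimal sketch follows, where the test directory, size, and runtime are arbitrary choices to adjust for your environment:

# random read/write test roughly resembling indexer I/O; run against the Splunk data volume
fio --name=splunk-iops-test --directory=/opt/splunk/var \
    --ioengine=libaio --direct=1 --rw=randrw --bs=4k \
    --size=1g --numjobs=4 --iodepth=32 --runtime=60 \
    --time_based --group_reporting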
Hi @Real_captain, the search seems to be correct and you should have results also for the present time; are you sure that you have data for the last day that match the conditions? Anyway, your solution with append is subject to the limit of 50,000 results because it's a subsearch. About the graph, you should be able to plot a graph with your search; see the Visualization tab, or use it in a panel. Ciao. Giuseppe
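For what it's worth, the append command accepts a maxout argument if you need to adjust that ceiling for a single search. A sketch with placeholder searches; raising the limit has memory and runtime costs:

index=main sourcetype=your_sourcetype
| append maxout=100000 [ search index=other sourcetype=other_sourcetype ]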
Hi @JJE, I'm not interested in your logs, only in the timestamp format! Anyway, check whether the timestamp has the format I described, and in that case use the TIME_FORMAT option in props.conf. Ciao. Giuseppe
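For readers finding this thread later, a minimal props.conf sketch of what such a timestamp configuration could look like; the sourcetype name and the format string are placeholders, since the actual format is only described earlier in the thread:

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30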
Hi @pavithra, to answer your question I need more information: filename, path, column separator, sourcetype, index. Anyway, supposing that the file is called "myfile2024-08-09.csv" and that the path is "/opt/data/files", you could use this in inputs.conf:

[monitor:///opt/data/files/myfile*.csv]
disabled = 0
index = your_index
sourcetype = your_sourcetype
host = your_host

Then you should also configure props.conf with INDEXED_EXTRACTIONS = CSV. Ciao. Giuseppe
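To make that last step concrete, a minimal props.conf sketch; the sourcetype name and the timestamp field are placeholders, and note that INDEXED_EXTRACTIONS must be configured on the forwarder that monitors the file:

[your_sourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
TIMESTAMP_FIELDS = your_timestamp_column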
Thanks @richgalloway and @isoutamo for your time, it worked!
Hello, can anyone help me in getting this error resolved?

2024-08-09 10:50:00,282 DEBUG pid=8956 tid=MainThread file=connectionpool.py:_new_conn:1007 | Starting new HTTPS connection (5): cisco-managed-ap-northeast-2.s3.ap-northeast-2.amazonaws.com:443
2024-08-09 10:50:00,312 DEBUG pid=8956 tid=MainThread file=endpoint.py:_do_get_response:205 | Exception received when sending HTTP request.
Traceback (most recent call last):
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connectionpool.py", line 710, in urlopen
    chunked=chunked,
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connectionpool.py", line 386, in _make_request
    self._validate_conn(conn)
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connectionpool.py", line 1042, in _validate_conn
    conn.connect()
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/connection.py", line 429, in connect
    tls_in_tls=tls_in_tls,
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/util/ssl_.py", line 450, in ssl_wrap_socket
    sock, context, tls_in_tls, server_hostname=server_hostname
  File "/splb001/splunk_fw_teams/etc/apps/TA-cisco-cloud-security-umbrella-addon/bin/ta_cisco_cloud_security_umbrella_addon/aob_py3/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
    return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
  File "/splb001/splunk_fw_teams/lib/python3.7/ssl.py", line 423, in wrap_socket
    session=session
  File "/splb001/splunk_fw_teams/lib/python3.7/ssl.py", line 870, in _create
    self.do_handshake()
  File "/splb001/splunk_fw_teams/lib/python3.7/ssl.py", line 1139, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)
Hi All, please provide conf files (inputs.conf, props.conf, outputs.conf) to index the below format of data on a daily basis.
Hi Team, is there any way to create a Sankey-style tile for a single value? The image below shows the grouped value, which I would like to break into single values, like Account Locked and Invalid Login, each in a separate tile.
Hi, I have a few questions regarding Dashboard Studio:
1. Is there any way to customise the shape menu, e.g. the line shape?
2. Can I rotate an image to fit my design?
3. How or where can I find more shape images? (See the attached image below.)
4. How can I make text appear like a shadow in this space?
Hi All, I have created a map tile in Dashboard Studio; the query runs with no issue, but I cannot see the output and I am getting the same error message for multiple map tiles. The map layouts below are from Dashboard Studio:
- Marker Layer with Base Configurations
- Marker Layer with Dynamic Coloring
- Bubble Layer with Single Series
- Bubble Layer with Multiple Series
- Choropleth Layer World
- Choropleth Layer with hidden base layer

Code:
index=test "event.Properties.apikey"="*" "event.endpoint"="*"
| iplocation event.Properties.ip
| dedup event.Properties.ip
| top limit=20 Country

Output: blank, even though there is data, and no error was triggered.
When importing Prometheus metric data into Splunk, the following errors are output (the import is performed using 'Prometheus Metrics for Splunk').

/opt/splunk/var/log/splunk/splunkd.log:

WARN PipelineCpuUsageTracker [1627 parsing] - No indexkey available chan=source::prometheusrw:sourcetype::prometheusrw:host::splunk-hf-75869c4964-phm44 timetook=1 msec.

WARN TcpOutputProc [9736 indexerPipe] - Pipeline data does not have indexKey. [_path] = /opt/splunk/etc/apps/modinput_prometheus/linux_x86_64/bin/prometheusrw\n[_raw] = \n[_meta] = punct::\n[_stmid] = 3CUUsSnja9PAAB.B\n[MetaData:Source] = source::prometheusrw\n[MetaData:Host] = host::splunk-hf-6448d7ffdb-ltzbr\n[MetaData:Sourcetype] = sourcetype::prometheusrw\n[_done] = _done\n[_linebreaker] = _linebreaker\n[_charSet] = UTF-8\n[_conf] = source::prometheusrw|host::splunk-hf-6448d7ffdb-ltzbr|prometheusrw|2\n[_channel] = 2\n

Please tell me the cause of these errors and how to deal with them.
I plan to develop a custom visualization. I edited formatter.html:

<form class="splunk-formatter-section" section-label="Data Series">
  <splunk-control-group label="Data Type">
    <splunk-select id="dataTypeSelect" name="{{VIZ_NAMESPACE}}.dataType" value="Custom">
      <option value="Custom">Custom</option>
      <option value="XBar_R-X">XBar R - X</option>
      <option value="LineChart">LineChart</option>
      <option value="Pie">Pie</option>
      <option value="Gauge">Gauge</option>
    </splunk-select>
  </splunk-control-group>
  <splunk-control-group label="Option">
    <splunk-text-area id="optionTextArea" name="{{VIZ_NAMESPACE}}.option" value="{}">
    </splunk-text-area>
  </splunk-control-group>
...

When the dataType changes, I would like the Option textarea to show a different value in the format menu. The Option menu has many choices; how do I modify visualization_source.js to achieve this?
Hi, I recently tried creating a private app on Splunk Cloud. The app is created successfully, but it does not show up in the list of apps on Splunk Cloud. I tried creating the app using both barebones and sample_app as a template, with different App IDs, but it didn't work; the app is created, no error is displayed, and I set the visibility to yes. Can someone please assist me with this? Thanks!
values gives you an ordered set of unique values; try using the list aggregation function instead:

index=core_ct_report_*
| eval brand=case(like(report_model, "cfg%"), "grandstream", like(report_model, "cisco%"), "Cisco", like(report_model, "ata%"), "Cisco", like(report_model, "snom%"), "Snom", like(report_model, "VISION%"), "Snom", like(report_model, "yealink%"), "Yealink", 1=1, "Other")
| stats count by fw_version, report_model, brand
| stats values(brand) as brand, list(fw_version) as fw_version, list(count) as count by report_model
| table brand report_model fw_version count
1) Max 50k rows / b) Will splitting the CSV work?

It's unfortunate that you cannot change limits.conf. Yes, splitting the CSV will work. If you don't need these CSVs as a lookup, that's not a problem. But if you still need a lookup, you will need to maintain two sets of CSVs: one for the lookup, the rest for this purpose. (Alternatively, you can modify your searches to use multiple lookups. At that point, your code can become unmaintainable.)

2) Join command

This is where things become intensely interesting. I did not compare your statements with the actual depiction. After reviewing your original description, I notice that your depiction (and illustration) is a left join with the CSV on the left and the index search on the right. In this regard, Splunk's join is working exactly as documented.

| inputlookup host.csv
| join type=left ip_address
    [ search index=owner
    | rename ip as ip_address]
| table host ip_address owner

Here is an emulation:

| makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4"
``` the above emulates | inputlookup host.csv ```
| join type=left ip_address
    [makeresults format=csv data="ip, host, owner
10.1.1.3, host3, owner3
10.1.1.4, host4, owner4
10.1.1.5, host5, owner5"
    | eval index = "owner"
    ``` the above emulates index=owner ```
    | rename ip as ip_address]
| table host ip_address owner

The result is the same:

host   ip_address  owner
host1  10.1.1.1
host2  10.1.1.2
host3  10.1.1.3    owner3
host4  10.1.1.4    owner4

I suspect that the "bad" output you observe is caused by the 50K row limit. (Try a smaller CSV and a smaller index search and you should see.)

"In the solution you provided index will be treated as left data because it's specified first"

Unlike join, the append-stats method that many Splunkers use does not really depend on which dataset is introduced first. The control is in the filter.
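To make that last point concrete, here is a minimal sketch of the append-stats pattern under the same assumptions as the emulation above; the in_lookup marker field is an illustrative name, not something from the original post:

| inputlookup host.csv
| eval in_lookup = 1
``` the index search can come first or last; the marker field is what matters ```
| append
    [ search index=owner
    | rename ip as ip_address
    | fields ip_address owner]
| stats values(host) as host, values(owner) as owner, max(in_lookup) as in_lookup by ip_address
``` the filter below is what makes this behave like a left join on the CSV ```
| where isnotnull(in_lookup)
| table host ip_address owner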