All Topics
Hello Splunkers,

I have the following raw event. It was parsing with the correct date and time until daylight saving time started, but after March 13th (when DST began) I see a one-hour mismatch. What changes should I make in props.conf to show the correct time?

Parsed time: 3/13/22 11:59:59.989 PM

Raw event: 2022-03-13 22:59:59,989 |v144031v~212657|*** conn[SSL/TLS]=103 CLIENT(1.1.2.2:23) disconnected.

Thanks in advance
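One possible direction, assuming the source writes timestamps in a fixed zone (the sourcetype name, format, and zone below are assumptions to adapt): pinning TZ in props.conf on the parsing tier keeps the offset from shifting when DST starts.

# props.conf (sketch; adjust sourcetype, format, and zone to your data)
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
# If the source actually logs UTC, TZ = UTC is the usual fix; a DST-less
# fixed-offset zone would also explain a one-hour drift appearing at DST.
TZ = UTC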
Hi, for the standard "predict" command in Splunk, what are the options for accessing the ACCURACY of the predictions? Thanks, Patrick
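One way to get at accuracy, for what it's worth: predict can emit confidence-interval fields via its upperNN/lowerNN options, and you can compare predictions against actuals yourself. A sketch, with the base search and span as placeholders:

index=your_index
| timechart span=1d count as actual
| predict actual as predicted upper95=upper lower95=lower
| eval abs_err = abs(actual - predicted)
| stats avg(abs_err) as mean_absolute_error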
Hello, I have configured a custom indexed field via transforms.conf and props.conf as follows:

transforms.conf (/apps/search/local/):

[EventID]
FORMAT = EventID::$1
REGEX = <regex expression>
WRITE_META = true

props.conf (/apps/search/local):

[<sourcetype>]
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = custom
pulldown_type = 1
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-EventID = EventID

fields.conf (etc/system/local):

[sourcetype::<sourcetype>::EventID]
INDEXED = True

The field EventID is getting indexed; I have checked it via:

| walklex index="<index-name>" type=field
| search NOT field=" *"
| stats values(field)

The field also shows up in the sidebar when searching in smart mode, but not when searching in fast mode. Is there any way to make it show up in fast mode too? I assumed this would have been done by the fields.conf stanza, but it seems not to work for me.
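For comparison, the documented fields.conf form uses the bare field name as the stanza header (sourcetype-scoped stanzas are, as far as I know, not part of the fields.conf spec); a minimal sketch on the search head:

# fields.conf (search head)
[EventID]
INDEXED = true

Note also that fast mode disables field discovery by design, so the sidebar may stay sparse there regardless of this stanza.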
Hi Splunk Community, I have 2 tables I am attempting to merge together. Both tables are in CSVs that I am trying to pull from. Does anyone know the command so that the data from the second table gets added to the bottom of the first?

table 1: a1, b1, c3
table 2: d4, e5, f6

Combined: a1, b1, c3, d4, e5, f6
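If both files are Splunk lookups, append stacks the second under the first (filenames here are placeholders for your actual lookup names):

| inputlookup table1.csv
| append [| inputlookup table2.csv]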
Hi, I have a dashboard with a panel, and only if the user clicks on a row of this panel should another panel pop up on the same dashboard. The field that connects both panels is "SESSION_UUID". The drilldown feature is not currently working, though. Here is my XML code:

<form theme="dark" script="tokenlist.js">
  <row>
    <panel>
      <table>
        <search>
          <query>
            index=fraud_glassbox sourcetype="gb:sessions"
            | table SESSION_UUID Global_MCMID_CSH SESSION_TIMESTAMP COUNTRY CITY CLIENT_IP Global_EmailID_CSH
          </query>
        </search>
        <option name="drilldown">cell</option>
        <drilldown>
          <set token="tokComponent">$row.component$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row depends="$tokComponent$">
    <panel>
      <table>
        <search>
          <query>
            index=fraud_glassbox sourcetype="gb:hit" component="$tokComponent$
            | table HEADER_REQUEST_REFERER, URL_PATH, SESSION_TIMESTAMP, username, CLIENT_IP, PACKET_IP
          </query>
        </search>
      </table>
    </panel>
  </row>
</form>

Can you please help? At the moment I am receiving the following error, and the 2nd panel should not be appearing anyway unless the user clicks on a row in the 1st panel:

Thanks, Patrick
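Two things stand out in that XML, as a possible (unverified) fix: the token is set from $row.component$ but the first table has no "component" column, and the second query's component filter is missing its closing quote. A sketch of the corrected pieces:

<drilldown>
  <set token="tokComponent">$row.SESSION_UUID$</set>
</drilldown>

and, in the second panel's query:

index=fraud_glassbox sourcetype="gb:hit" component="$tokComponent$"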
Hello Community! We have a particular set of searches that rely on a lookup against a managed lookup (adhock). The lookup is 2 columns, Username and Status. Currently, we update this list manually every day by going into content management, searching for the file, and then adding and deleting entries. This was OK to start, but now the list is getting unmanageable. What we would like to do, ideally, is take a local CSV and upload it over top of the one that exists via a PowerShell script run on a local machine. If that is not an option, I would be willing to have a script that creates a search to update the managed lookup, which can then be copied and pasted into a search. Looking for suggestions and ideas. Thanks in advance.
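For the copy-and-paste fallback, a script could emit SPL of roughly this shape (the lookup filename and the sample rows are assumptions); outputlookup replaces the file's contents wholesale:

| makeresults count=1
| eval data="alice,Active;bob,Disabled"
| makemv delim=";" data
| mvexpand data
| rex field=data "^(?<Username>[^,]+),(?<Status>.+)$"
| table Username Status
| outputlookup adhock.csv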
We have a distributed search environment, with 2 very old indexers (the original servers) and 3 new indexers in a cluster.  The old indexers have been removed from the destination lists in outputs.conf nearly everywhere, and most of the data is between 5 and 6 months old, except for internal indexes. I can't find what my next steps are to prep these servers for retirement, such as force-freezing the buckets they still hold, etc.  Suggestions? Thanks.
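One hedged option for the force-freeze step, assuming the old indexers hold only data you are ready to age out: drop the retention window in indexes.conf on those two boxes only, restart, and let buckets roll to frozen on their own.

# indexes.conf on the retiring indexers only (a sketch; this ages out
# anything older than 1 day, so confirm nothing there is still needed)
[default]
frozenTimePeriodInSecs = 86400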
There are a lot of security alerts for "Powershell DownloadString" for the Chocolatey installer. Is there a way to whitelist that alert, keeping in mind that there was a recent attack, "Serpent Backdoor Slithers into Orgs Using Chocolatey Installer"?

Reference links:
https://threatpost.com/serpent-backdoor-chocolatey-installer/179027/
https://www.bleepingcomputer.com/news/security/serpent-malware-campaign-abuses-chocolatey-windows-package-manager/
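One common pattern is to exclude only the exact installer invocation inside the alert's search rather than suppressing the whole alert; the field name and URL match below are assumptions to adapt to your correlation search, and given that Serpent abused Chocolatey, the narrower the exclusion the better:

(existing "Powershell DownloadString" alert search)
| search NOT process_command_line="*community.chocolatey.org/install.ps1*"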
Hi Team,

Because the data storage time of Splunk is limited, we have a scheduled task to export data from Splunk to AWS S3 through the Splunk SDK.

SDK output mode: JSON

SPL:

search index=dput
| fields - _raw date_* _cd _kv _bkt _si splunk_server punct timeendpos exectime index lang
| table *

But recently I encountered a problem. When I batch-query data within a 10-minute window (about 400,000 logs), I found that some logs lose some fields. For example:

raw data:

"2022-03-01T20:47:04.435Z [XNIO-1 task-16] INFO c.m.assertservice.service.impl.NotebookServiceImpl env=\"PROD\" hostname=\"\" client_ip=\"\" service_name=\"assetservice\" service_version=\"release-1.12.0\" request_id=\"98ad59ad-e973-4258-b559-a5c82476f14d\" event_type=\"read\" event_status=\"success\" event_severity=\"low\" notebook_topics=\"[Manager Research]\" object_type=\"Notebook\" object_id=\"6bcb4ad5-596c-4738-90b9-4bdff9515f12\" component=\"\" event_id=\"98ad59ad-e973-4258-b559-a5c82476f14d\" application=\"\" user_id=\"\" notebook_title=\"Portfolio Manager Performance History\" action=\"GET\" details=\"Get a notebook,title:Portfolio Manager Performance History, type:[LIBRARY]\" eventtype=\"usage\" timestamp=\"2022-03-01T20:47:04.435348Z\" application_area=\"NONE\" event_description=\"Get Notebook By Id UsageTracking\""

search result:

{
  "_indextime": "1646167627",
  "_sourcetype": "dput_usage",
  "_subsecond": ".435",
  "_time": "2022-03-01T14:47:04.435-06:00",
  "action": "GET",
  "application": "",
  "application_area": "NONE",
  "component": "",
  "details": "Get a notebook,title:Portfolio Manager Performance History, type:[LIBRARY]",
  "env": "PROD",
  "event_id": "98ad59ad-e973-4258-b559-a5c82476f14d",
  "event_length": "899",
  "event_status": "success",
  "eventtype": "usage",
  "extracted_sourcetype": "dput_usage",
  "host": "",
  "hostname": "",
  "linecount": "1",
  "object_id": "6bcb4ad5-596c-4738-90b9-4bdff9515f12",
  "object_type": "Notebook",
  "source": "",
  "sourcetype": "dput_usage",
  "timestamp": "2022-03-01T20:47:04.435348Z",
  "timestartpos": "0",
  "user_id": ""
}

You can see that fields present in the raw data, such as notebook_title and notebook_topics, do not appear in the search result. (I also seem to have this problem exporting JSON from the Web UI.) This happens when I query a lot of data at the same time.

But when I query this log alone and return it through the SDK, the problem does not occur; it returns all the fields:

{
  "_indextime": "1646167627",
  "_sourcetype": "dput_usage",
  "_subsecond": ".435",
  "_time": "2022-03-01T14:47:04.435-06:00",
  "action": "GET",
  "application": "",
  "application_area": "NONE",
  "client_ip": "",
  "component": "",
  "details": "Get a notebook,title:Portfolio Manager Performance History, type:[LIBRARY]",
  "env": "PROD",
  "event_description": "Get Notebook By Id UsageTracking",
  "event_id": "98ad59ad-e973-4258-b559-a5c82476f14d",
  "event_length": "899",
  "event_severity": "low",
  "event_status": "success",
  "event_type": "read",
  "eventtype": "usage",
  "extracted_sourcetype": "dput_usage",
  "host": "",
  "hostname": "",
  "linecount": "1",
  "notebook_title": "Portfolio Manager Performance History",
  "notebook_topics": "[Manager Research]",
  "object_id": "6bcb4ad5-596c-4738-90b9-4bdff9515f12",
  "object_type": "Notebook",
  "request_id": "98ad59ad-e973-4258-b559-a5c82476f14d",
  "service_name": "assetservice",
  "service_version": "release-1.12.0",
  "source": "",
  "sourcetype": "dput_usage",
  "timestamp": "2022-03-01T20:47:04.435348Z",
  "timestartpos": "0",
  "user_id": ""
}

The Java SDK version I am using is 1.8.0 and the C# SDK is 2.2.9. Can anyone answer my doubts? Thanks a lot!
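One thing that may be worth checking (an assumption, not a confirmed diagnosis): automatic key-value extraction has per-event caps in limits.conf, and those limits are a known reason search-time fields can go missing. The settings look roughly like this and can be raised:

# limits.conf (sketch; raising these increases search-time cost)
[kv]
# maximum number of fields auto KV extraction will create per event
limit = 100
# only this many characters of _raw are scanned for auto KV
maxchars = 10240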
Hello Team, we are using AppDynamics SaaS to monitor our servers' infrastructure metrics. We noticed one of the Windows servers went down on 13th April 2022, but when we checked now (18th April 2022) we are unable to see its entry in the console. Could you please confirm whether an entry disappears from the console if the server has not reported to AppDynamics for a certain period? If yes, can you share some details about it? Thanks, Selvaganesh E
Hello, dear Splunkers, I'm facing a problem when trying to run a query from Splunk DB Connect against an MSSQL database. I'm connecting to the MS SQL server using the connection type "MS-SQL Server Using MS Generic Driver" and I get this exception:

com.microsoft.sqlserver.jdbc.SQLServerException: The "variant" data type is not supported. No results found.

My Splunk DB Connect is on version 3.8.0 and the Splunk DBX Add-on for Microsoft SQL Server JDBC is on version 1.1.0.
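Since the JDBC driver refuses sql_variant columns, one workaround sketch is to CAST them to a concrete type inside the query itself (the table and column names below are placeholders):

SELECT id,
       CAST(variant_col AS NVARCHAR(4000)) AS variant_col
FROM dbo.your_table;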
Hello Team, a few of our HFs were configured to send logs to syslog-ng, a local server for log storage. After upgrading the certificates on those forwarders, logs stopped coming into Splunk. It's working fine on the forwarders that are not configured to send data to syslog-ng. We tried to remove the syslog-ng config from the HF settings, but still no data is coming in. Any ideas or thoughts on this? Maybe someone has had a similar issue previously. Is a certificate upgrade needed on the syslog-ng server as well? Thanks in advance. Muhammad Murad
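A first step that sometimes narrows this down: check what the affected HFs are logging about their own outputs (the host value is a placeholder; component names vary by version, so it may pay to start broad):

index=_internal host=<affected_hf> (log_level=ERROR OR log_level=WARN)
| stats count by component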
Hello, I noticed that my frozen folders are not split up by index. Instead I have a literal "$_index_name" directory at the root of the volume. This is my configuration:

[default]
maxTotalDataSizeMB = 1000000
frozenTimePeriodInSecs = 13824000
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
tstatsHomePath = volume:hot/$_index_name/datamodel_summary
summaryHomePath = volume:hot/$_index_name/summary
thawedPath = $SPLUNK_DB/$_index_name/thaweddb
coldToFrozenDir = /frozen/$_index_name/frozendb
repFactor = auto

Is there a way to fix it? Thank you
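If your Splunk version does not expand $_index_name inside coldToFrozenDir (older releases didn't, which would produce exactly this literal directory), a workaround sketch is to spell the path out per index; the index names below are placeholders:

# indexes.conf (sketch; one explicit stanza per index)
[web_logs]
coldToFrozenDir = /frozen/web_logs/frozendb

[firewall]
coldToFrozenDir = /frozen/firewall/frozendb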
I want to track license usage live, not from the rollover summary; I want the host, current license usage, and index name.
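A sketch against the license master's own usage log, whose type=Usage lines carry host (h), index (idx), and bytes (b) fields; adjust the time range to taste:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by h idx
| eval GB = round(bytes/1024/1024/1024, 3)
| rename h as host, idx as index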
Hi Team, could you please clarify my doubt about connectivity between heavy forwarders and universal forwarders? I have 2 sites, with a heavy forwarder and universal forwarders on each site. Do I need to connect the universal forwarders on site X to the heavy forwarder on site X only, or to the heavy forwarders on both sites X and Y? There will be connectivity between both sites. The heavy forwarders are not connected to each other; they push data to indexers, which are clustered.
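For the site-local variant, the UF side is just an outputs.conf pointing at that site's HF (hostname and port below are placeholders); listing both sites' HFs in server= instead would give cross-site failover:

# outputs.conf on a site X universal forwarder (sketch)
[tcpout]
defaultGroup = site_x_hf

[tcpout:site_x_hf]
server = hf-x.example.com:9997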
Hello everyone, I have one UF deployed via a deployment server. On that UF I have an outputs.conf pointing to an indexer; now I want to remove that indexer's IP and add a new indexer's IP. To do that, I simply went into that outputs.conf, deleted the old IP, and added the new one, but I'm still not getting any logs. So, please tell me how to overwrite the outputs.conf settings and distribute them from the deployment server.
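The usual pattern is to ship outputs.conf inside a deployment app rather than hand-editing the UF, since the deployment server re-pushes apps and can revert local edits. A sketch, with the app name, server class, and addresses as placeholders:

# $SPLUNK_HOME/etc/deployment-apps/org_all_forwarder_outputs/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = <new_idx_ip>:9997

# serverclass.conf on the deployment server
[serverClass:all_ufs]
whitelist.0 = *

[serverClass:all_ufs:app:org_all_forwarder_outputs]
restartSplunkd = true

After that, "splunk reload deploy-server" on the deployment server pushes the app out.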
Hi, I am looking for a search command for generating a typical graph with multiple fields, as below. The CSV file has the following data:

IPAddress, Severity
192.168.1.4, Low
192.168.1.5, High
192.168.1.6, Medium
192.168.1.4, High
192.168.1.4, Medium
192.168.1.5, Low
192.168.1.5, Low
192.168.1.6, High
192.168.1.6, Low

I am looking to see the data in a Splunk visualization similar to a bar chart of severity counts per IP address (the reference graph was plotted in Excel from the above CSV table). Appreciate your inputs. ~Arjun
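A chart with an over/by split gives that shape; the lookup filename below is a placeholder:

| inputlookup severity_data.csv
| chart count over IPAddress by Severity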
Hi, I have a problem here. I have already completed the file transfer to the Splunk server using a cron job, but unfortunately none of the transferred files are being picked up by Splunk. Need assistance with this.
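For Splunk to index files landed by a cron job, something has to monitor the drop directory; a sketch (path, index, and sourcetype are placeholders, and the splunk user must be able to read the files):

# inputs.conf on the Splunk server (sketch)
[monitor:///data/incoming]
disabled = false
index = main
sourcetype = transferred_files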
Hi, can we integrate HSM (key operation) logs with Splunk? Please advise. HSM models: payShield 9000 and payShield 10000.
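If the payShields can emit their audit trail over syslog (worth confirming for your firmware), a plain syslog input is one way in; the port and names below are assumptions:

# inputs.conf (sketch; a syslog-ng/rsyslog relay in front is the more robust pattern)
[udp://514]
sourcetype = payshield:audit
index = hsm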
We have got the below vulnerabilities on our Splunk servers; please help with how to resolve them.

Insecure cipher suites:
* TLS 1.2 ciphers:
* TLS_RSA_WITH_AES_128_CBC_SHA256
* TLS_RSA_WITH_AES_128_GCM_SHA256
* TLS_RSA_WITH_AES_256_GCM_SHA384
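Those are static-RSA key-exchange suites (no forward secrecy), which scanners commonly flag; one approach sketch is restricting splunkd to ECDHE suites via cipherSuite in server.conf (web.conf and inputs.conf have equivalent settings where TLS is used). Test carefully, as older forwarders may fail to negotiate:

# server.conf (sketch; verify connectivity from all clients after changing)
[sslConfig]
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256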