All Topics


I have a simple .csv I ingest daily via a monitored file. My .csv has some fields in it that show dates/times, but they do NOT represent the time I want the event indexed at. I want the _time to show the time the .csv file was ingested and for Splunk to ignore the other fields in the .csv which have dates/times present. I have created a new source type by cloning .csv and set the timestamp to use "current time"; however, Splunk will still prefer to use random dates/times found in field values and only uses "current time" when no fields contain any other time information. I can "fix" this by manually adding a time field in the .csv before ingesting, but I am trying to automate this process as much as possible. Is there a way I can force Splunk to ignore all date/time values found in a .csv and use ingest time for the _time value? Thank you in advance!
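For reference, a minimal props.conf sketch of the sourcetype being described, with a hypothetical stanza name my_csv; note that for INDEXED_EXTRACTIONS sourcetypes the stanza must live on the forwarder that monitors the file, not only on the indexers, which is a common reason a "current time" setting appears to be ignored:

[my_csv]
# hypothetical sourcetype name; parse the file as structured CSV
INDEXED_EXTRACTIONS = csv
# force _time to the time of ingestion instead of any value found in the data
DATETIME_CONFIG = CURRENT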
Hello! We have some logs coming across which are in JSON and thus "just work". The problem is that the events we need to extract are inside the log field. There are about 200 apps that will be logging this way, and each app will have different fields and values, so a standard field extraction won't work, or would mean thousands of potential KV pairs. The format, however, is always the same:

{ "log": "[2022-08-25 18:54:40.031] INFO JsonLogger [[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5] [event: 25670349-e6b5-4996-9cb6-c4c9657cd9ba]: {\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"message\" : \"MESSAGE_HERE\",\n \"tracePoint\" : \"END\",\n \"priority\" : \"INFO\",\n \"elapsed\" : 0,\n \"locationInfo\" : {\n \"lineInFile\" : \"95\",\n \"component\" : \"json-logger:logger\",\n \"fileName\" : \"buildLoggingAndResponse.xml\",\n \"rootContainer\" : \"responseStatus_success\"\n },\n \"timestamp\" : \"2022-08-25T18:54:40.030Z\",\n \"content\" : {\n \"ResponseStatus\" : {\n \"type\" : \"SUCCESS\",\n \"title\" : \"Encryption successful.\",\n \"status\" : \"200\",\n \"detail\" : { },\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"apiMethodName\" : null,\n \"apiURL\" : \"https://app.com/prd-ops-mulesoft/encrypt/v1.0\",\n \"apiVersion\" : \"v1\",\n \"x-ConsumerRequestSentTimeStamp\" : \"\",\n \"apiRequestReceivedTimeStamp\" : \"2022-08-25T18:54:39.856Z\",\n \"apiResponseSentTimeStamp\" : \"2022-08-25T18:54:40.031Z\",\n \"userId\" : \"GID01350\",\n \"orchestrations\" : [ ]\n }\n },\n \"applicationName\" : \"ops-mulesoft\",\n \"applicationVersion\" : \"v1\",\n \"environment\" : \"PRD\",\n \"threadName\" : \"[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5\"\n}\n", "stream": "stdout", "time": "2022-08-25T18:54:40.086450071Z", "kubernetes": { "pod_name": "prd-ops-mulesoft-94c49bdff-pcb5n", "namespace_name": "4cfa0f08-92b0-467b-9ca4-9e49083fd922", "pod_id": "e9046b5e-0d70-11ed-9db5-0050569b19f6", "labels": { "am-org-id": "01a4664d-9e16-454b-a14c-59548ef896b5", "app": "prd-ops-mulesoft", "environment": "4cfa0f08-92b0-467b-9ca4-9e49083fd922", "master-org-id": "01a4664d-9e16-454b-a14c-59548ef896b5", "organization": "01a4664d-9e16-454b-a14c-59548ef896b5", "pod-template-hash": "94c49bdff", "rtf.mulesoft.com/generation": "aab6b8074cf73151b1515de0e468478e", "rtf.mulesoft.com/id": "18d3e5d6-ce59-4837-9f3b-8aad3ccffcef", "type": "MuleApplication" }, "host": "1.1.1.1", "container_name": "app", "docker_id": "cf07f321aec551b200fb3f31f6f1c67b2678ff6f6a335d4ca41ec2565770513c", "container_hash": "rtf-runtime-registry.kprod.msap.io/mulesoft/poseidon-runtime-4.3.0@sha256:6cfeb965e0ff7671778bc53a54a05d8180d4522f0b1ef7bb25e674686b8c3b75", "container_image": "rtf-runtime-registry.kprod.msap.io/mulesoft/poseidon-runtime-4.3.0:20211222-2" } }

The JSON works fine, but the events we ALSO want extracted are in here:

{\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"message\" : \"MESSAGE_HERE\",\n \"tracePoint\" : \"END\",\n \"priority\" : \"INFO\",\n \"elapsed\" : 0,\n \"locationInfo\" : {\n \"lineInFile\" : \"95\",\n \"component\" : \"json-logger:logger\",\n \"fileName\" : \"buildLoggingAndResponse.xml\",\n \"rootContainer\" : \"responseStatus_success\"\n },\n \"timestamp\" : \"2022-08-25T18:54:40.030Z\",\n \"content\" : {\n \"ResponseStatus\" : {\n \"type\" : \"SUCCESS\",\n \"title\" : \"Encryption successful.\",\n \"status\" : \"200\",\n \"detail\" : { },\n \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\",\n \"apiMethodName\" : null,\n \"apiURL\" : \"https://app.com/prd-ops-mulesoft/encrypt/v1.0\",\n \"apiVersion\" : \"v1\",\n \"x-ConsumerRequestSentTimeStamp\" : \"\",\n \"apiRequestReceivedTimeStamp\" : \"2022-08-25T18:54:39.856Z\",\n \"apiResponseSentTimeStamp\" : \"2022-08-25T18:54:40.031Z\",\n \"userId\" : \"GID01350\",\n \"orchestrations\" : [ ]\n }\n },\n \"applicationName\" : \"ops-mulesoft\",\n \"applicationVersion\" : \"v1\",\n \"environment\" : \"PRD\",\n \"threadName\" : \"[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5\"\n}

SPATH would work, but the JSON is fronted by this:

"[2022-08-25 18:54:40.031] INFO JsonLogger [[MuleRuntime].uber.143312: [prd-ops-mulesoft].encryptFlow.BLOCKING @4ac358d5] [event: 25670349-e6b5-4996-9cb6-c4c9657cd9ba]:

So it doesn't treat it as JSON. This is one example; other events don't have the correlationId, etc. We need a method that will take the raw data, parse it as JSON, AND then dynamically extract the events in the log field as their KV pairs (e.g. \"correlationId\" : \"25670349-e6b5-4996-9cb6-c4c9657cd9ba\" == $1::$2). Can this be done in transforms using regex? Is this even possible, or do we ultimately need to create extractions based on every possible field? Appreciate the guidance here! Thanks!!
We are currently tasked with having Splunk monitor an AKS cluster in Azure, and we are comparing two solutions:
- Installing Splunk Connect for Kubernetes in AKS, as per this thread: We are thinking of moving to Azure Kontainer Servi... - Splunk Community
- Another pattern that was done before is to enable Azure Monitor, which in turn ships logs to Event Hub, from which they are eventually consumed by Splunk via the Splunk Add-on for Microsoft Cloud Services.
How do the two solutions compare, and which is the preferred solution?
We have several devices that perform endpoint and network device scanning. As intended, they scan prohibited ports to verify they are not open; however, the ESCU correlation searches, specifically the "Prohibited Network Traffic Allowed" rule, are detecting thousands of these events each day. How can I prevent notable events from being created in Enterprise Security when the source is one of the scanning devices? Thank you.
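One common pattern, sketched here with a hypothetical lookup named authorized_scanners.csv containing a src column, is to filter the known scanners out of the correlation search itself, before the notable action fires:

... existing "Prohibited Network Traffic Allowed" search ...
| search NOT [| inputlookup authorized_scanners.csv | fields src]

Enterprise Security's notable event suppression feature is an alternative, but filtering inside the search avoids creating the notables at all.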
I'm trying to create a playbook that uses the Windows Remote Management app to take a file saved locally on a server and move it to a location on a network share. I've tried different command and PowerShell options as well as the WRM app's built-in 'copy-item' action, and none of them work. I can run these commands and scripts locally on the server, logged in as the user that would be performing these actions through SOAR, and everything works fine. I can also have SOAR move the file from a local folder to another local folder and everything works fine. It's only when I ask SOAR to move it to a network share that it will not work. Examples of what I'm doing:
Move-Item -Path C:\folder\file.txt -Destination \\servername\sharename
This script works fine locally, but not through SOAR.
Move-Item -Path C:\folder\file.txt -Destination C:\differentfolder\file.txt
This script works fine both locally and through SOAR.
I've tried mapping the drive so I can use M:\file.txt and it still fails. I've asked SOAR to run the commands directly and have also tried letting SOAR run a script that uses these commands, and it will not work. It doesn't seem to be a permission issue, since I'm able to do all of this locally. I'm at a loss for what else I can try or what else to look for as possible issues. Thanks for any help.
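For what it's worth, this symptom (local commands work, but a remote session cannot reach a second machine) matches the classic WinRM "second hop" restriction: the remote session's logon token generally cannot be delegated onward to the file server. A sketch of one workaround, passing explicit credentials inside the remote script; the credential handling here is illustrative, and a real playbook would inject stored credentials rather than prompt:

# Map the share with explicit credentials so the hop to the file server
# authenticates directly instead of relying on the delegated WinRM token.
$cred = Get-Credential   # illustrative; supply stored credentials in SOAR
New-PSDrive -Name M -PSProvider FileSystem -Root '\\servername\sharename' -Credential $cred
Move-Item -Path 'C:\folder\file.txt' -Destination 'M:\file.txt'
Remove-PSDrive -Name M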
I recently have taken my splunk core use
I have a message thread, and these messages are coming into Splunk. The chain consists of ten different messages: five messages from one (primary) system and five messages from another (backup) system. Messages from the primary system share the same SrcMsgId value, and messages from the backup system are likewise combined by a common SrcMsgId. Messages from the backup system also have a Mainsys_srcMsgId value, which is identical to the primary system's SrcMsgId value. The message chain from the backup system enters Splunk immediately after the messages from the primary system. How can I display the chain of all ten messages? Perhaps first the messages from the primary system, then those from the backup system, with the time of arrival at the server displayed; for the time, I understand I will include _time in the query. I have become a little familiar with the query syntax, but I still have a lot of difficulty creating queries. Please help me with an example of the correct query. Thank you in advance!
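A sketch of one way to do this, with a hypothetical index name. Mainsys_srcMsgId exists only on backup-system messages, so coalescing it with SrcMsgId yields a single chain key shared by all ten messages, and a small sort helper lists the primary messages first:

index=my_messages (SrcMsgId=* OR Mainsys_srcMsgId=*)
| eval chain_id=coalesce(Mainsys_srcMsgId, SrcMsgId)
| eval system=if(isnull(Mainsys_srcMsgId), "primary", "backup")
| eval sys_order=if(system="primary", 0, 1)
| sort 0 chain_id sys_order _time
| table _time chain_id system SrcMsgId Mainsys_srcMsgId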
Good morning. We have been tracking a recent reduction in our log ingest rate. After a great deal of searching, it appears that the reduction in XML Windows event logs occurred the same week that Windows patching occurred, in July of 2022. We are down by approximately 10%, maybe a little less. We have noted that the XML WinEventLog index appears to be the only one affected. I'm concerned because this could indicate:
1. Patching broke logging on the Windows systems and we aren't getting everything we used to or should
2. Patching made logging more efficient and we are getting the same or better/more data with less overall size
3. Something else could be broken within Splunk itself and this is the only indication
We opened an on-demand case and they found nothing wrong. We opened a support case and they told us what we could see for ourselves in the cloud monitoring console. We've continued to search and investigate, and our working theory is that patching affected the logging. We now need to know whether it's a good thing (number 2) or a bad thing (number 1). My question is: has anyone else noticed a drop in xmlwineventlog volume over the last few months? Thanks in advance.
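For anyone comparing, a sketch of the daily-volume check involved, assuming a hypothetical index name wineventlog_xml; license_usage.log records per-index usage in the b (bytes) field:

index=_internal source=*license_usage.log type=Usage idx=wineventlog_xml
| timechart span=1d sum(b) as bytes_ingested

Overlaying a distinct host count (dc(h)) on the same data helps separate "fewer hosts logging" from "smaller events per host".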
Is there an API available, or some other SPL-searchable way, to find the index cluster replication factor? I would like to create some dashboards and searches for monitoring our indexers and would like to be able to display the replication factor. I have been using "/services/search/distributed/peers" for some information, but is there an API available that will tell me what the replication factor is? This needs to be "run anywhere", as it will be deployed to at least 5 separate environments, so hard-coding won't suffice.
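A sketch worth testing, under the assumption that the search runs on (or can reach) a node that is part of the cluster: the cluster configuration REST endpoint exposes the replication and search factors.

| rest /services/cluster/config
| table splunk_server mode replication_factor search_factor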
Hi Splunkers, I have a multiselect dropdown field in my Splunk dashboard. When I select 2 options from it, I notice it's previewed as "value1" "value2", and no results are returned. I assume it works as value1 AND value2, but I expected it to work as value1 OR value2. How do I configure that? Kevin
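A Simple XML sketch of the usual fix: the multiselect's delimiter controls what is placed between selected values, so setting it to " OR " (with valuePrefix/valueSuffix wrapping each value) produces value1 OR value2 in the token. The field name myfield is a placeholder:

<input type="multiselect" token="tok_values" searchWhenChanged="true">
  <label>Values</label>
  <choice value="value1">value1</choice>
  <choice value="value2">value2</choice>
  <!-- each selection becomes myfield="..."; selections are joined with OR -->
  <valuePrefix>myfield="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
</input>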
Hi, I am running the query below; however, I am getting an error saying relation "analytics_hca_change_indicator_event" doesn't exist whenever the table doesn't exist in even one of the schemas.
| koogledimen service=TenantPPASQuery action=AdhocQuery targetGroup="keng03-dev01-ins08-wfm19-dbs" app="Unknown_App/ppas_dheeraj_r9int" schema="_ALL_" query="select date(createdtm), count(*) from analytics_hca_change_indicator_event group by createdtm "
| eval envstatus=if(like(scope, "%dev01%"), 1, 0)
| eval wfmstatus=if(like(scope, "%wfm19%"), 1, 0)
| where envstatus=1 AND wfmstatus=1
| eval wfm_schemaname = mvindex(split(scope, "-"), -1)."_".schema_name
| chart sum(count) by date, wfm_schemaname
How can I handle this scenario, please?
Hi, I created a new correlation search that needs to generate notables, so in the "Adaptive Response Actions" I added the "Notable" action with all its information. Doing a manual search with the same time span as the correlation search, I get the expected output. The problem is that the correlation search doesn't create the same number of notables. For example, in a time range of 4 hours, the correlation search generated 4 notables, whereas the manual search returned 28 events. Running the search "index=_internal sourcetype=scheduler" over the same time range, I found the 28 events generated by the correlation search, of which 24 have these parameters:
result_count=0 alert_actions="" suppressed=0 status=success
and 4 have these parameters:
result_count=1 alert_actions="notable,risk" suppressed=0 status=success
Why, if I run the manual search (the same as the correlation search), do I get 28 results, while the correlation search generated only 4 notables? Thank you
This gives me the following warnings:

PS C:\Program Files> .\SplunkUniversalForwarder\bin\splunk.exe btool --check --debug
Unrecognized argument: --check
PS C:\Program Files> .\SplunkUniversalForwarder\bin\splunk.exe btool check --debug
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\learned\local\props.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\default-mode.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\health.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\inputs.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\limits.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\outputs.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\props.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\server.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\default\web.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\introspection_generator_addon\default\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\introspection_generator_addon\default\inputs.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\introspection_generator_addon\default\server.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\search\default\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\search\default\props.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\search\default\restmap.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\search\default\transforms.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\splunk_httpinput\default\inputs.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\splunk_internal_metrics\default\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\splunk_internal_metrics\default\props.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\apps\splunk_internal_metrics\default\transforms.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\manager-apps\_cluster\default\indexes.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf
Invalid key in stanza [webhook] in C:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf, line 229: enable_allowlist (value: false).
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\system\default\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\audit.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\authentication.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\authorize.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\system\default\conf.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\default-mode.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\system\default\federated.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\global-banner.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\health.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\inputs.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\limits.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\livetail.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\messages.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\metric_alerts.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\metric_rollups.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\outputs.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\procmon-filters.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\props.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\restmap.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\server.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\source-classifier.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\system\default\telemetry.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\transforms.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\visualizations.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\web-features.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\web.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\workload_policy.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\workload_pools.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\workload_rules.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\local\authentication.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\local\deploymentclient.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\local\migration.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\local\server.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\local\user-seed.conf
PS C:\Program Files>

When I try to upgrade the universal forwarder to splunkforwarder-9.0.1-82c987350fde-x64-release.msi, the install process hangs, but it does eventually finish. I'm looking for a workaround for these warnings on my Windows forwarder.
I have a dashboard that gets its base query from a dropdown option; to run that base query, it takes the values from other dropdowns, populates them, and then runs the search. This is a part of the query:
<input type="dropdown" token="tokSearchOption1" searchWhenChanged="true">
  <label>Select Query</label>
  <choice value="Original">Original</choice>
  <choice value="Filtered">Filtered</choice>
  <change>
    <condition value="Original">
      <set token="tokSearchQuery">index=pos | fields host,_raw | rex field=host "REG(?&lt;store_id&gt;\d{1,4})(?&lt;register_id&gt;\d{1,2})" | search store_id="$store_id$" AND register_id="$register_id$"
where the store_id and register_id values are rendered from other dropdowns. But the first time we hit Submit in the dashboard, the query shows no results, because it is not able to resolve the values from the other dropdowns, i.e. it is not taking the value from the store_id dropdown and the register_id dropdown. This happens only the first time the dashboard is loaded; after that it works fine! How can I fix the issue?
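A sketch of the usual fix, assuming the other inputs look roughly like this (the choice and default values are placeholders): give store_id and register_id a <default> so they are defined before the first <change> handler fires; otherwise $store_id$ and $register_id$ are substituted while still unset on first load.

<input type="dropdown" token="store_id" searchWhenChanged="true">
  <label>Store</label>
  <choice value="1">Store 1</choice>
  <!-- a default makes $store_id$ resolve before the tokSearchOption1 change handler runs -->
  <default>1</default>
</input>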
Hi, I am having some trouble merging two searches and I am looking for the best way to do this. We have firewall traffic with NAT that is applied at two levels. My goal is to be able to identify each flow with both its original and NATed IP addresses. To explain:
FW1: src1, dst1, xlatesrc1, xlatedst1
FW2: src2 (=xlatesrc1), dst2 (=xlatedst1), xlatedst2
Goal = table: src1, dst1, xlatesrc1, xlatedst1 (=xlatedst2 if it exists, xlatedst1 otherwise)
I have made something like:
search_FW1
| stats by src1,dst1,xlatesrc1,xlatedst1
| join left=[ search search_FW2 | stats values(xlatedst2) as xlatedst1 by src2]
| rename src2 as xlatesrc1
| table src1,dst1,xlatesrc1,xlatedst1
But I have noticed that if src2 does not exist in search_FW1, I lose the event from my main search (search_FW1) :(. I thought that the "left" parameter of "join" would solve the issue, but it does not... Any idea how to avoid this (and maybe optimize my search, as I have seen that "join" has poor performance)? Thanks
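A sketch of a corrected join, with two caveats: join's left-join syntax is type=left (not left=), and join needs an explicit join field, which renaming src2 to xlatesrc1 inside the subsearch provides; coalesce then picks xlatedst2 when it exists:

search_FW1
| stats count by src1 dst1 xlatesrc1 xlatedst1
| join type=left xlatesrc1 [
    search search_FW2
    | stats values(xlatedst2) as xlatedst2 by src2
    | rename src2 as xlatesrc1 ]
| eval xlatedst_final=coalesce(xlatedst2, xlatedst1)
| table src1 dst1 xlatesrc1 xlatedst_final

If join performance becomes a problem, the usual alternative is to search both firewalls in one base search and aggregate with stats over a shared key instead of joining.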
Hi, is it possible to hide the values of the chart overlay in Dashboard Studio, to simulate a trend line?
Hi all, How do I get two fields (IP addresses) into a timechart? I tried concatenating the fields, but the result shows up wrong in my visualisation of src and dest IP.
index=firewall dest_ip=* src=* dest_port=8090 action=blocked
| eval dstsrc=dest_ip . src
| timechart count by dstsrc
Regards Jan
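A sketch of the usual fix: concatenating without a separator runs the two addresses together into one unreadable value, so insert a delimiter between them:

index=firewall dest_ip=* src=* dest_port=8090 action=blocked
| eval dstsrc=src." -> ".dest_ip
| timechart count by dstsrc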
Hi Team, I am unable to open my Splunk Cloud REST API URLs on my local machine. Do we need to enable something on my local machine? Please find the error below. Thanks, Venkata Krishna
Hello, Is there any app or add-on for Imperva DAM logs? Currently I'm getting the logs in CEF format. If not, can I use the Imperva Add-on for WAF logs instead? Does it work? If so, what configuration would be required?
I saw there are responses from 2013 and 2015 saying you cannot rename a report. Why is this still not a thing? Is there something preventing this from being added? This seems very basic, and it is disappointing that I cannot rename my report without deleting and recreating it.